Results 1 - 20 of 27
1.
Front Neurosci ; 17: 1218510, 2023.
Article in English | MEDLINE | ID: mdl-37901437

ABSTRACT

Introduction: Sensory inference and top-down predictive processing, reflected in human neural activity, play a critical role in higher-order cognitive processes, such as language comprehension. However, the neurobiological bases of predictive processing in higher-order cognitive processes are not well understood. Methods: This study used electroencephalography (EEG) to track participants' cortical dynamics in response to Austrian Sign Language and reversed sign language videos, measuring neural coherence to optical flow in the visual signal. We then used machine learning to assess the entropy-based relevance of specific frequencies and regions of interest to brain-state classification accuracy. Results: EEG features highly relevant for classification were distributed across language processing-related regions in Deaf signers (frontal cortex and left hemisphere), while in non-signers such features were concentrated in visual and spatial processing regions. Discussion: The results highlight the functional significance of predictive processing time windows for sign language comprehension and biological motion processing, and the role of long-term experience (learning) in minimizing prediction error.
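The coherence measure described here can be sketched in a few lines. The following is a minimal illustration, not the study's pipeline: magnitude-squared coherence between one EEG channel and the video's optical-flow magnitude, with placeholder arrays and an assumed common sampling rate.

```python
# Hypothetical sketch: coherence between one EEG channel and a video's
# optical-flow magnitude, assuming both are resampled to a common rate fs.
import numpy as np
from scipy.signal import coherence

fs = 100.0                               # common sampling rate (Hz), assumed
rng = np.random.default_rng(0)
eeg = rng.standard_normal(6000)          # placeholder EEG channel
flow = rng.standard_normal(6000)         # placeholder optical-flow magnitude

f, Cxy = coherence(eeg, flow, fs=fs, nperseg=512)
band = (f >= 4) & (f <= 5)               # syllable-rate band of interest
print("mean 4-5 Hz coherence:", Cxy[band].mean())
```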

2.
IEEE Trans Pattern Anal Mach Intell ; 45(11): 14052-14054, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37402186

ABSTRACT

A recent paper claims that a newly proposed method classifies EEG data recorded from subjects viewing ImageNet stimuli better than two prior methods. However, the analysis used to support that claim is based on confounded data. We repeat the analysis on a large new dataset that is free from that confound. Training and testing on aggregated supertrials derived by summing trials demonstrates that the two prior methods achieve statistically significant above-chance accuracy while the newly proposed method does not.
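The supertrial construction mentioned above is simple to express. Below is a hedged sketch (names and shapes are illustrative, not the paper's code) that sums randomly chosen same-class trials into aggregated supertrials.

```python
# Illustrative sketch of supertrial aggregation: sum same-class trials
# within a split to form aggregated examples. Shapes and sizes are invented.
import numpy as np

def make_supertrials(trials, labels, n_per_class=10, rng=None):
    """trials: (n_trials, n_channels, n_times); returns summed supertrials."""
    rng = rng or np.random.default_rng()
    out_X, out_y = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        for _ in range(n_per_class):
            pick = rng.choice(idx, size=min(len(idx), 20), replace=False)
            out_X.append(trials[pick].sum(axis=0))   # aggregate by summing
            out_y.append(c)
    return np.stack(out_X), np.array(out_y)

X = np.random.default_rng(0).standard_normal((200, 32, 100))  # toy trials
y = np.repeat(np.arange(4), 50)
Xs, ys = make_supertrials(X, y)
print(Xs.shape, ys.shape)
```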

3.
PLoS One ; 17(2): e0262098, 2022.
Article in English | MEDLINE | ID: mdl-35213558

ABSTRACT

Longstanding cross-linguistic work on event representations in spoken languages has argued for a robust mapping between an event's underlying representation and its syntactic encoding, such that, for example, the agent of an event is most frequently mapped to subject position. In the same vein, sign languages have long been claimed to construct signs that visually represent their meaning, i.e., signs that are iconic. Experimental research on linguistic parameters such as plurality and aspect has recently shown some of them to be visually universal in sign, i.e., recognized by non-signers as well as signers, and has identified specific visual cues that achieve this mapping. However, little is known about what makes action representations in sign language iconic, or whether and how the mapping of underlying event representations to syntactic encoding is visually apparent in the form of a verb sign. To this end, we asked what visual cues non-signers may use in evaluating transitivity (i.e., the number of entities involved in an action). To do this, we correlated non-signer judgments about the transitivity of verb signs from American Sign Language (ASL) with phonological characteristics of these signs. We found that non-signers did not accurately guess the transitivity of the signs, but that non-signer transitivity judgments can nevertheless be predicted from the signs' visual characteristics. Further, non-signers home in on just those features that code event representations across sign languages, despite interpreting them differently. This suggests the existence of visual biases that underlie the detection of linguistic categories, such as transitivity, which may uncouple from underlying conceptual representations over time in mature sign languages due to lexicalization processes.
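The correlational logic of this study can be illustrated with a toy regression: predicting per-sign mean transitivity ratings from coded phonological features. The feature names below are invented for illustration.

```python
# Minimal sketch of the correlational logic: predict mean non-signer
# transitivity ratings from coded phonological features of each sign.
# Feature names and all values are fabricated for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# one row per ASL verb sign: [two_handed, path_movement, handshape_change]
X = rng.integers(0, 2, size=(80, 3)).astype(float)
y = rng.uniform(1, 7, size=80)           # mean transitivity rating per sign

model = LinearRegression().fit(X, y)
print("R^2:", model.score(X, y))         # how well visual features predict ratings
print("feature weights:", model.coef_)
```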


Subjects
Deafness/prevention & control, Linguistics/trends, Sign Language, Vision, Ocular/physiology, Deafness/physiopathology, Female, Fingers/physiology, Hand/physiology, Humans, Judgment, Male, Thumb/physiology
4.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 9217-9220, 2022 12.
Article in English | MEDLINE | ID: mdl-34665721

ABSTRACT

Neuroimaging experiments in general, and EEG experiments in particular, must take care to avoid confounds. A recent TPAMI paper uses data that suffers from a serious, previously reported confound. We demonstrate that their new model and analysis methods do not remedy this confound, and therefore that their claims of high accuracy and neuroscience relevance are invalid.


Subjects
Algorithms, Brain Mapping, Brain Mapping/methods, Brain/diagnostic imaging, Neuroimaging, Learning, Electroencephalography/methods
5.
Int J Behav Dev ; 45(5): 397-408, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34690387

ABSTRACT

Acquisition of natural language has been shown to fundamentally impact both one's ability to use the first language and the ability to learn subsequent languages later in life. Sign languages offer a unique perspective on this issue, because Deaf signers receive access to signed input at varying ages. The majority acquire sign language in (early) childhood, but some learn sign language later, a situation that is drastically different from that of spoken language acquisition. To investigate the effect of age of sign language acquisition and its potential interplay with age in signers, we examined grammatical acceptability ratings and reaction time measures in a group of Deaf signers (age range: 28-58 years) with early (0-3 years) or later (4-7 years) acquisition of sign language in childhood. Behavioral responses to grammatical word order variations (subject-object-verb vs. object-subject-verb) were examined across three sentence types: 1) simple sentences, 2) topicalized sentences, and 3) sentences involving manual classifier constructions, which are uniquely characteristic of sign languages. Overall, older participants responded more slowly. Age of acquisition had subtle effects on acceptability ratings, whereby the direction of the effect depended on the specific linguistic structure.

6.
J Exp Psychol Learn Mem Cogn ; 47(6): 998-1011, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33211523

ABSTRACT

Nonsigners viewing sign language are sometimes able to guess the meaning of signs by relying on the overt connection between form and meaning, or iconicity (cf. Ortega, Özyürek, & Peeters, 2020; Strickland et al., 2015). One word class in sign languages that appears to be highly iconic is classifiers: verb-like signs that can refer to location change or handling. Classifier use and meaning are governed by linguistic rules, yet in comparison with lexical verb signs, classifiers are highly variable in their morpho-phonology (variety of potential handshapes and motion directions within the sign). These open-class linguistic items in sign languages prompt a question about the mechanisms of their processing: are they part of a gestural-semiotic system (processed like the gestures of nonsigners), or are they processed as linguistic verbs? To examine the psychological mechanisms of classifier comprehension, we recorded the electroencephalogram (EEG) activity of signers who watched videos of signed sentences with classifiers. We manipulated the word order of the stimulus sentences (subject-object-verb [SOV] vs. object-subject-verb [OSV]), contrasting two conditions that, on a linguistic-processing account, should incur increased processing costs for OSV orders. As previously reported for lexical signs, we observed an N400 effect for OSV compared with SOV, reflecting increased cognitive load for linguistic processing. These findings support the hypothesis that classifiers are a linguistic part of speech in sign language, extending the current understanding of processing mechanisms at the interface of linguistic form and meaning.
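For readers unfamiliar with ERP contrasts, the following is a minimal sketch, on fabricated arrays, of how a mean-amplitude N400 comparison between OSV and SOV conditions is typically computed (the 300-500 ms window and centro-parietal averaging are assumptions; the paper's exact parameters may differ).

```python
# A minimal sketch on fabricated arrays (NOT the study's data or pipeline):
# mean amplitude in a conventional N400 window (300-500 ms) per condition,
# assuming epochs already averaged over centro-parietal channels.
import numpy as np
from scipy.stats import ttest_rel

fs, t0 = 500, -0.2                       # sampling rate (Hz), epoch onset (s)
rng = np.random.default_rng(2)
sov = rng.standard_normal((20, 600))     # (n_subjects, n_timepoints)
osv = rng.standard_normal((20, 600)) - 0.3   # simulated extra negativity

win = slice(int((0.3 - t0) * fs), int((0.5 - t0) * fs))
sov_amp = sov[:, win].mean(axis=1)
osv_amp = osv[:, win].mean(axis=1)
print(ttest_rel(osv_amp, sov_amp))       # paired test across subjects
```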


Subjects
Psycholinguistics, Sign Language, Adult, Electroencephalography, Evoked Potentials, Female, Humans, Male, Middle Aged
7.
Article in English | MEDLINE | ID: mdl-33211652

ABSTRACT

A recent paper [31] claims to classify brain processing evoked in subjects watching ImageNet stimuli, as measured with EEG, and to employ a representation derived from this processing to construct a novel object classifier. That paper, together with a series of subsequent papers [11, 18, 20, 24, 25, 30, 34], claims to achieve successful results on a wide variety of computer-vision tasks, including object classification, transfer learning, and generation of images depicting human perception and thought, using brain-derived representations measured through EEG. Our novel experiments and analyses demonstrate that their results crucially depend on the block design that they employ, where all stimuli of a given class are presented together, and fail with a rapid-event design, where stimuli of different classes are randomly intermixed. Because every trial in their test sets comes from the same block as many trials in the corresponding training sets, their block design leads to classification of arbitrary brain states based on block-level temporal correlations that are known to exist in all EEG data, rather than stimulus-related activity. This invalidates all subsequent analyses performed on these data in multiple published papers and calls into question all of the reported results. We further show that a novel object classifier constructed with a random codebook performs as well as or better than one constructed with the representation extracted from EEG data, suggesting that their classifier's performance does not benefit from the brain-derived representation. Together, our results illustrate the far-reaching implications, for classification experiments, of the temporal autocorrelations that exist in all neuroimaging data. Further, our results calibrate the underlying difficulty of the tasks involved and caution against overly optimistic, but incorrect, claims to the contrary.
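A toy simulation makes the confound concrete: data containing only slow block-level drift, and no stimulus information at all, still classifies far above chance when same-block trials are split across training and test sets. This is an illustrative sketch, not the authors' analysis code.

```python
# Illustrative simulation of the block-design confound: each "class" is
# recorded as one contiguous block sharing a slow drift state. The features
# carry NO stimulus information, yet accuracy is far above chance (0.25)
# because train and test folds contain trials from the same blocks.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_classes, trials_per_block, n_feat = 4, 50, 20

X, y = [], []
for c in range(n_classes):                   # block design: one class per block
    drift = rng.standard_normal(n_feat)      # slow state shared by the block
    X.append(drift + 0.1 * rng.standard_normal((trials_per_block, n_feat)))
    y.append(np.full(trials_per_block, c))
X, y = np.vstack(X), np.concatenate(y)

print(cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean())
# With a rapid-event design (stimuli intermixed), drift no longer tracks
# class labels and accuracy falls to chance.
```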

8.
Wiley Interdiscip Rev Cogn Sci ; 11(1): e1518, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31505710

ABSTRACT

To understand human language, both spoken and signed, the listener or viewer has to parse the continuous external signal into components. The question of what those components are (e.g., phrases, words, sounds, phonemes?) has been a subject of long-standing debate. We re-frame this question to ask: what properties of the incoming visual or auditory signal are indispensable to eliciting language comprehension? In this review, we assess the phenomenon of language parsing from a modality-independent viewpoint. We show that the interplay between dynamic changes in the entropy of the signal and neural entrainment to the signal at the syllable level (4-5 Hz range) is causally related to language comprehension in both speech and sign language. This modality-independent Entropy Syllable Parsing model for the linguistic signal offers insight into the mechanisms of language processing, suggesting common neurocomputational bases for syllables in speech and sign language. This article is categorized under: Linguistics > Linguistic Theory; Linguistics > Language in Mind and Brain; Linguistics > Computational Models of Language; Psychology > Language.
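A minimal sketch of one ingredient of the model, dynamic entropy of the signal, computed with a sliding-window histogram estimator (the window, step, and bin counts are arbitrary choices here, not values from the review):

```python
# Sliding-window Shannon entropy of a 1-D signal (histogram estimator).
# Illustrative only; window/step/bins are arbitrary assumptions.
import numpy as np

def sliding_entropy(x, win=200, step=50, bins=16):
    ents = []
    for start in range(0, len(x) - win + 1, step):
        p, _ = np.histogram(x[start:start + win], bins=bins)
        p = p[p > 0] / p.sum()
        ents.append(-(p * np.log2(p)).sum())  # Shannon entropy in bits
    return np.array(ents)

signal = np.random.default_rng(4).standard_normal(2000)  # placeholder signal
print(sliding_entropy(signal)[:5])
```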


Subjects
Communication, Linguistics, Comprehension, Humans, Models, Theoretical, Sign Language
9.
Brain Lang ; 200: 104708, 2020 01.
Article in English | MEDLINE | ID: mdl-31698097

ABSTRACT

One of the key questions in the study of human language acquisition is the extent to which the development of neural processing networks for different components of language is modulated by exposure to linguistic stimuli. Sign languages offer a unique perspective on this issue, because prelingually Deaf children who receive access to complex linguistic input later in life provide a window into brain maturation in the absence of language, and into the subsequent neuroplasticity of neurolinguistic networks during late language learning. While the duration of sensitive periods for the acquisition of linguistic subsystems (sound, vocabulary, and syntactic structure) is well established on the basis of L2 acquisition in spoken language, for sign languages the relative timelines for development of neural processing networks for linguistic sub-domains are unknown. We examined the neural responses of a group of Deaf signers, who had received access to signed input at varying ages, to three linguistic phenomena: classifier signs, syntactic structure, and information structure. The amplitude of the N400 response to the marked word order condition negatively correlated with the age of acquisition for syntax and information structure, indicating increased cognitive load in these conditions. Additionally, the combination of behavioral and neural data suggested that late learners preferentially relied on classifiers over word order for meaning extraction. This suggests that late acquisition of sign language significantly increases cognitive load during analysis of syntax and information structure, but not word-level meaning.
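The key correlational result can be illustrated in miniature: per-signer N400 amplitude for the marked word order plotted against age of acquisition, on fabricated values.

```python
# Sketch of the reported correlation on fabricated values: per-signer N400
# amplitude for the marked word order vs. age of sign language acquisition.
import numpy as np
from scipy.stats import pearsonr

aoa = np.array([0, 1, 2, 3, 4, 5, 6, 7, 2, 5], dtype=float)   # years (toy)
n400 = -0.4 * aoa + np.random.default_rng(5).normal(0, 0.5, aoa.size)

r, p = pearsonr(aoa, n400)
print(f"r = {r:.2f}, p = {p:.3f}")   # negative r: later AoA, more negative N400
```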


Subjects
Aging, Deafness/physiopathology, Deafness/psychology, Electroencephalography, Language Development, Learning/physiology, Linguistics, Sign Language, Adult, Evoked Potentials, Female, Humans, Male, Middle Aged, Neuronal Plasticity, Vocabulary
10.
Cortex ; 112: 69-79, 2019 03.
Article in English | MEDLINE | ID: mdl-30001920

ABSTRACT

The question of apparent discrepancies in short-term memory capacity for sign language and speech has long presented difficulties for models of verbal working memory. While short-term memory (STM) capacity for spoken language spans up to 7 ± 2 items, the verbal working memory capacity for sign languages appears to be lower, at 5 ± 2. The assumption that both auditory and visual communication (sign language) rely on the same memory buffers led to claims that STM buffers are impaired in sign language users. Yet no common model deals with both the sensory and the linguistic nature of spoken and sign languages. The authors present a generalized neural model (GNM) of short-term memory use across modalities, which accounts for experimental results in both sign and spoken languages. GNM postulates that during hierarchically organized processing phases in language comprehension, spoken language users rely on neural resources for spatial representation in a sequential rehearsal strategy, i.e., the phonological loop. The spatial nature of sign language precludes signers from utilizing a similar 'overflow' strategy, which speakers rely on to extend their STM capacity. This model offers a parsimonious neuroarchitectural explanation for the conflict between spatial and linguistic processing in spoken language, as well as for the differences observed in STM capacity for sign and speech.


Subjects
Auditory Perception/physiology, Memory, Short-Term/physiology, Models, Neurological, Sign Language, Speech/physiology, Visual Perception/physiology, Humans, Verbal Learning/physiology
11.
Lang Speech ; 62(4): 652-680, 2019 Dec.
Article in English | MEDLINE | ID: mdl-30354860

ABSTRACT

Previous studies of Austrian Sign Language (ÖGS) word-order variations have demonstrated the human processing system's tendency to interpret a sentence-initial (case-)ambiguous argument as the subject of the clause ("subject preference"). The electroencephalogram study motivating the current report revealed earlier reanalysis effects for object-subject compared to subject-object sentences, in particular before the start of the movement of the agreement-marking sign. The effects were bound to time points prior to when both arguments were referenced in space and/or to the transitional hand movement preceding the disambiguating sign. Due to the temporal proximity of these time points, it was not clear which visual cues led to disambiguation; that is, whether non-manual markings (body/shoulder/head shift towards the subject position) or the transitional hand movement resolved the ambiguity. The present gating study further supports the claim that disambiguation in ÖGS is triggered by cues occurring before the movement of the disambiguating sign. It also confirms the presence of the subject preference in ÖGS, showing again that signers and speakers draw on similar strategies during language processing, independent of language modality. Although the ultimate role of the visual cues leading to disambiguation (i.e., non-manual markings and transitional movements) requires further investigation, the present study shows that they contribute crucial information about argument structure during online processing. This finding provides strong support for granting these cues some degree of linguistic status (at least in ÖGS).


Subjects
Linguistics, Movement, Sign Language, Austria, Cues, Electroencephalography, Female, Humans, Language, Male, Photic Stimulation, Time Factors
12.
Brain Res ; 1691: 105-117, 2018 07 15.
Article in English | MEDLINE | ID: mdl-29627484

ABSTRACT

Research on spoken languages has identified a "subject preference" processing strategy for tackling input that is syntactically ambiguous as to whether a sentence-initial NP is a subject or object. The present study documents that the "subject preference" strategy is also seen in the processing of a sign language, supporting the hypothesis that the subject-first strategy is universal and not dependent on the language modality (spoken vs. signed). Deaf signers of Austrian Sign Language (ÖGS) were shown videos of locally ambiguous signed sentences in SOV and OSV word orders. Electroencephalogram (EEG) data indicated higher cognitive load in response to OSV stimuli (i.e., a negativity for OSV compared to SOV), indicative of syntactic reanalysis cost. A finding that is specific to the visual modality is that the ERP (event-related potential) effect reflecting linguistic reanalysis occurred earlier than might have been expected, that is, before the time point at which the path movement of the disambiguating sign was visible. We suggest that in the visual modality, transitional movement of the articulators prior to the disambiguating verb position, or co-occurring non-manual (face/body) markings, were used in resolving the local ambiguity in ÖGS. Thus, whereas the processing strategy of "subject preference" is cross-modal at the linguistic level, the cues that enable the processor to apply that strategy differ in signing as compared to speech.


Subjects
Comprehension/physiology, Evoked Potentials/physiology, Linguistics, Sign Language, Speech, Brain/physiology, Electroencephalography, Female, Humans, Language, Male, Photic Stimulation, Reaction Time/physiology, Time Factors
13.
Lang Speech ; 61(1): 97-112, 2018 03.
Article in English | MEDLINE | ID: mdl-28565932

ABSTRACT

The ability to convey information is a fundamental property of communicative signals. For sign languages, which are overtly produced with multiple, completely visible articulators, the question arises as to how the various channels coordinate and interact with each other. We analyze motion capture data of American Sign Language (ASL) narratives, and show that the capacity of information throughput, mathematically defined, is highest on the dominant hand (DH). We further demonstrate that information transfer capacity is also significant for the non-dominant hand (NDH) and the head channel, as compared to control channels (ankles). We discuss both redundancy and independence in articulator motion in sign language, and argue that the NDH and the head articulators contribute to the overall information transfer capacity, indicating that they are neither completely redundant to, nor completely independent of, the DH.
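As an illustration of the per-channel comparison, one could use the Shannon entropy of each articulator's speed distribution as a rough throughput proxy. This sketch is an assumption-laden stand-in for the paper's mathematical definition, which is not reproduced here.

```python
# Rough, assumption-laden proxy for per-channel information capacity:
# Shannon entropy of each articulator's speed distribution, using a fixed
# binning shared across channels so they are comparable. Illustrative only.
import numpy as np

BINS = np.linspace(0.0, 12.0, 33)        # shared bins across channels

def speed_entropy(positions):
    """positions: (n_frames, 3) marker trajectory; entropy in bits/frame."""
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    p, _ = np.histogram(speed, bins=BINS)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(6)
for name, scale in [("dominant hand", 3.0), ("non-dominant hand", 1.5),
                    ("head", 1.0), ("ankle (control)", 0.2)]:
    traj = np.cumsum(scale * rng.standard_normal((1000, 3)), axis=0)  # toy data
    print(f"{name}: {speed_entropy(traj):.2f} bits/frame")
```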


Subjects
Comprehension, Hand/physiology, Movement, Sign Language, Visual Perception, Algorithms, Functional Laterality, Head Movements, Humans, Image Processing, Computer-Assisted, Time Factors, Video Recording
14.
Cognition ; 150: 77-84, 2016 May.
Article in English | MEDLINE | ID: mdl-26872248

ABSTRACT

Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3-8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers.
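Checking a movement trace for the theta oscillation reported here is straightforward; the sketch below estimates the 3-8 Hz share of power in a hypothetical facial-landmark trace via a Welch power spectral density (the 60 Hz frame rate and the trace itself are assumptions).

```python
# Hypothetical sketch: fraction of power in the 3-8 Hz theta band for a
# facial-landmark movement trace, via Welch PSD. All values are fabricated.
import numpy as np
from scipy.signal import welch

fs = 60.0                                # video frame rate, assumed
t = np.arange(0, 20, 1 / fs)
trace = np.sin(2 * np.pi * 5 * t) \
        + 0.5 * np.random.default_rng(7).standard_normal(t.size)

f, pxx = welch(trace, fs=fs, nperseg=256)
theta = (f >= 3) & (f <= 8)
print("theta fraction of power:", pxx[theta].sum() / pxx.sum())
```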


Subjects
Emotions/physiology, Facial Expression, Judgment, Photic Stimulation/methods, Adolescent, Adult, Female, Humans, Male, Young Adult
15.
J Deaf Stud Deaf Educ ; 21(2): 156-70, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26864688

ABSTRACT

There has been a scarcity of studies exploring the influence of students' American Sign Language (ASL) proficiency on their academic achievement in ASL/English bilingual programs. The aim of this study was to determine the effects of ASL proficiency on the reading comprehension skills and academic achievement of 85 deaf or hard-of-hearing signing students. Two subgroups, differing in ASL proficiency, were compared on the Northwest Evaluation Association Measures of Academic Progress and the reading comprehension subtest of the Stanford Achievement Test, 10th edition. Findings suggested that students highly proficient in ASL outperformed their less proficient peers on nationally standardized measures of reading comprehension, English language use, and mathematics. Moreover, a regression model consisting of five predictors (education, hearing devices, and secondary disabilities, as well as ASL proficiency and home language) showed that ASL proficiency was the only variable that significantly predicted results on all outcome measures. This study calls for a paradigm shift in thinking about deaf education, focusing on characteristics shared among successful deaf signing readers, specifically ASL fluency.
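The regression design can be sketched as follows, with simulated data; the predictor set follows the abstract, but the coding of each variable is invented.

```python
# Sketch of the five-predictor regression described above, on simulated
# data. Predictor names follow the abstract; their coding is invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 85                                   # matches the study's sample size
X = np.column_stack([
    rng.normal(size=n),                  # ASL proficiency (standardized)
    rng.integers(0, 2, n),               # home language: signing vs. not
    rng.normal(size=n),                  # education variable
    rng.integers(0, 2, n),               # hearing device use
    rng.integers(0, 2, n),               # secondary disability
])
y = 1.2 * X[:, 0] + rng.normal(size=n)   # only proficiency drives outcome here

print(sm.OLS(y, sm.add_constant(X)).fit().summary())
```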


Subjects
Educational Status, Hearing Loss/psychology, Multilingualism, Persons With Hearing Impairments/psychology, Sign Language, Comprehension, Education of Hearing Disabled, Humans, Mathematics/education, Reading, Regression Analysis, Risk Factors
16.
PeerJ ; 2: e446, 2014.
Article in English | MEDLINE | ID: mdl-25024915

ABSTRACT

Prior studies investigating cortical processing in Deaf signers suggest that life-long experience with sign language and/or auditory deprivation may alter the brain's anatomical structure and the function of brain regions typically recruited for auditory processing (Emmorey et al., 2010; Pénicaud et al., 2013, inter alia). We report the first investigation of the task-negative network in Deaf signers and its functional connectivity, i.e., the temporal correlations among spatially remote neurophysiological events. We show that Deaf signers manifest increased functional connectivity between the posterior cingulate/precuneus and the left medial temporal gyrus (MTG), as well as between the inferior parietal lobe and the medial temporal gyrus in the right hemisphere, areas that have been found to show functional recruitment specifically during sign language processing. These findings suggest that the organization of the brain at the level of inter-network connectivity is likely affected by experience with processing visual language, although sensory deprivation could be another source of the difference. We hypothesize that connectivity alterations in the task-negative network reflect predictive/automatized processing of the visual signal.
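Functional connectivity in this sense reduces to correlating ROI time courses. A minimal sketch on made-up data:

```python
# Minimal sketch of "functional connectivity as temporal correlation":
# Pearson correlation between mean time courses of two ROIs (made-up data;
# ROI names follow the abstract, the signals do not).
import numpy as np

rng = np.random.default_rng(9)
pcc_precuneus = rng.standard_normal(240)                        # ROI 1
left_mtg = 0.6 * pcc_precuneus + 0.8 * rng.standard_normal(240)  # ROI 2

r = np.corrcoef(pcc_precuneus, left_mtg)[0, 1]
print(f"PCC/precuneus <-> left MTG connectivity: r = {r:.2f}")
```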

17.
PLoS One ; 9(2): e86268, 2014.
Article in English | MEDLINE | ID: mdl-24516528

ABSTRACT

To fully define the grammar of American Sign Language (ASL), a linguistic model of its nonmanuals needs to be constructed. While significant progress has been made over years of research in understanding the features that define ASL manuals, much still needs to be done to uncover the discriminant nonmanual components. The major barrier to achieving this goal is the difficulty of correlating facial features and linguistic features, especially since these correlations may be temporally defined. For example, a facial feature (e.g., head moves down) occurring at the end of the movement of another facial feature (e.g., brows move up) may specify a Hypothetical conditional, but only if this time relationship is maintained. In other instances, the single occurrence of a movement (e.g., brows move up) can be indicative of the same grammatical construction. In the present paper, we introduce a linguistic-computational approach to carry out this analysis efficiently. First, a linguistic model of the face is used to manually annotate a very large set of 2,347 videos of ASL nonmanuals (comprising tens of thousands of frames). Second, a computational approach is used to determine which features of the linguistic model are most informative of the grammatical rules under study. We used the proposed approach to study five types of sentences (Hypothetical conditionals, Yes/no questions, Wh-questions, Wh-questions postposed, and Assertions) plus their polarities (positive and negative). Our results verify several components of the standard model of ASL nonmanuals and, most importantly, identify several previously unreported features and their temporal relationships. Notably, our results uncovered a complex interaction between head position and mouth shape. These findings define some temporal structures of ASL nonmanuals not previously detected by other approaches.
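The second step, ranking annotated features by informativeness, can be illustrated with mutual-information feature scoring; the feature encoding below is a toy stand-in for the linguistic model's annotations.

```python
# Sketch of step two above: ranking annotated nonmanual features by how
# informative they are of sentence type, via mutual information. The
# encoding and feature names are toy stand-ins, not the study's annotations.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(10)
# rows = annotated videos; columns = e.g. [brow raise, head down, mouth shape]
X = rng.integers(0, 3, size=(500, 3)).astype(float)
y = (X[:, 0] + (X[:, 1] > 1)).astype(int)   # sentence type depends on cols 0-1

scores = mutual_info_classif(X, y, discrete_features=True, random_state=0)
print(dict(zip(["brow_raise", "head_down", "mouth_shape"], scores.round(3))))
```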


Subjects
Manuals as Topic, Sign Language, Computer Simulation, Discriminant Analysis, Humans, Software, Time Factors, United States, Video Recording
18.
J Speech Lang Hear Res ; 56(5): 1677-88, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23926292

ABSTRACT

PURPOSE: Sign language users recruit physical properties of visual motion to convey linguistic information. Research on American Sign Language (ASL) indicates that signers systematically use kinematic features (e.g., velocity, deceleration) of dominant hand motion to distinguish specific semantic properties of verb classes in production (Malaia & Wilbur, 2012a) and process these distinctions as part of the phonological structure of these verb classes in comprehension (Malaia, Ranaweera, Wilbur, & Talavage, 2012). These studies are motivated by Wilbur's (2003) event visibility hypothesis, which proposes that such use of kinematic features should be universal to sign languages (SLs), arising from the grammaticalization of physics and geometry for linguistic purposes. In a prior motion capture study, Malaia and Wilbur (2012a) provided support for the event visibility hypothesis in ASL, but quantitative data from other SLs have not been available to test its generalization to other languages. METHOD: The authors investigated the kinematic parameters of predicates in Croatian Sign Language (Hrvatskom Znakovnom Jeziku [HZJ]). RESULTS: Kinematic features of verb signs were affected both by the event structure of the predicate (semantics) and by phrase position within the sentence (prosody). CONCLUSION: The data demonstrate that kinematic features of motion in HZJ verb signs are recruited to convey morphological and prosodic information. This is the first crosslinguistic motion capture confirmation that specific kinematic properties of articulator motion are grammaticalized in other SLs to express linguistic features.
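The kinematic parameters named here (velocity, deceleration) can be computed from a marker trajectory by finite differences. A minimal sketch, assuming a (frames x 3) position array and a 120 Hz capture rate:

```python
# Illustrative kinematics from a motion-capture marker trajectory:
# speed via finite differences, peak deceleration near the sign's end.
# The 120 Hz rate and the toy trajectory are assumptions.
import numpy as np

def kinematics(positions, fs=120.0):
    """positions: (n_frames, 3) hand-marker trajectory; peak speed/decel."""
    vel = np.gradient(positions, 1.0 / fs, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    accel = np.gradient(speed, 1.0 / fs)
    return speed.max(), -accel.min()     # peak speed, peak deceleration

rng = np.random.default_rng(11)
traj = np.cumsum(0.01 * rng.standard_normal((240, 3)), axis=0)
peak_speed, peak_decel = kinematics(traj)
print(f"peak speed {peak_speed:.3f} m/s, peak decel {peak_decel:.3f} m/s^2")
```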


Subjects
Deafness/rehabilitation, Movement, Phonetics, Semantics, Sign Language, Vocabulary, Biomechanical Phenomena, Hand, Humans, Language, Multivariate Analysis
19.
Lang Speech ; 55(Pt 3): 407-21, 2012 Sep.
Article in English | MEDLINE | ID: mdl-23094321

ABSTRACT

This article presents an experimental investigation of the kinematics of verb sign production in American Sign Language (ASL) using motion capture data. The results confirm that event structure differences in the meaning of the verbs are reflected in their kinematic formation: for example, in telic verbs (THROW, HIT), the end-point of the event is marked in the verb sign movement by significantly greater deceleration, as compared to atelic verbs (SWIM, TRAVEL). This end-point marker is highly robust regardless of the position of the verb in the sentence (medial vs. final), although other prominent kinematic measures, including sign duration and peak speed of dominant hand motion within the sign, are affected by prosodic processes such as Phrase Final Lengthening. The study provides the first kinematic confirmation that event structure is expressed in the movement profiles of ASL verbs, a claim previously supported only by perceptual distinctions. The findings raise further questions about the psychology of event representation, both in human languages and in the human mind.


Subjects
Biomechanical Phenomena, Phonetics, Semantics, Sign Language, Adult, Female, Humans, Linguistics, Male, Young Adult
20.
Neuroimage ; 59(4): 4094-101, 2012 Feb 15.
Article in English | MEDLINE | ID: mdl-22032944

ABSTRACT

Motion capture studies show that American Sign Language (ASL) signers distinguish end-points in telic verb signs by means of marked hand articulator motion, which rapidly decelerates to a stop at the end of these signs, as compared to atelic signs (Malaia and Wilbur, in press). Non-signers also show sensitivity to velocity and deceleration cues for event segmentation in visual scenes (Zacks et al., 2010; Zacks et al., 2006), raising the question of whether the neural regions used by ASL signers for sign language verb processing might be similar to those used by non-signers for event segmentation. The present study investigated the neural substrate of predicate perception and linguistic processing in ASL. Observed patterns of activation demonstrate that Deaf signers process telic verb signs as having higher phonological complexity compared to atelic verb signs. These results, together with previous neuroimaging data on spoken and sign languages (Shetreet et al., 2010; Emmorey et al., 2009), illustrate a route by which a prominent perceptual-kinematic feature used for non-linguistic event segmentation might come to be processed as an abstract linguistic feature due to sign language exposure.


Subjects
Brain/physiology, Magnetic Resonance Imaging, Motion Perception/physiology, Semantics, Sign Language, Adolescent, Adult, Female, Humans, Male, Middle Aged, Young Adult