ABSTRACT
Spoken language, both perception and production, is thought to be facilitated by an ensemble of predictive mechanisms. We obtain intracranial recordings in 37 patients using depth probes implanted along the anteroposterior extent of the supratemporal plane during rhythm listening, speech perception, and speech production. These reveal two predictive mechanisms in early auditory cortex with distinct anatomical and functional characteristics. The first, localized to bilateral Heschl's gyri and indexed by low-frequency phase, predicts the timing of acoustic events. The second, localized to planum temporale only in language-dominant cortex and indexed by high-gamma power, shows a transient response to acoustic stimuli that is uniquely suppressed during speech production. Chronometric stimulation of Heschl's gyrus selectively disrupts speech perception, while stimulation of planum temporale selectively disrupts speech production. This work illuminates the fundamental acoustic infrastructure, both architecture and function, for spoken language, grounding cognitive models of speech perception and production in human neurobiology.
Subjects
Auditory Cortex/physiopathology , Epilepsy/physiopathology , Acoustic Stimulation , Adult , Auditory Cortex/diagnostic imaging , Brain Mapping , Epilepsy/diagnostic imaging , Epilepsy/psychology , Female , Humans , Language , Magnetic Resonance Imaging , Male , Speech , Speech Perception , Young Adult
ABSTRACT
To what extent is the neural organization of language dependent on factors specific to the modalities in which language is perceived and through which it is produced? That is, is the left-hemisphere dominance for language a function of a linguistic specialization or a function of some domain-general specialization(s), such as temporal processing or motor planning? Investigations of the neurobiology of signed language can help answer these questions. As with spoken languages, signed languages of the deaf display complex grammatical structure but are perceived and produced via radically different modalities. Thus, by mapping out the neurological similarities and differences between signed and spoken language, it is possible to identify modality-specific contributions to brain organization for language. Research to date has shown a significant degree of similarity in the neurobiology of signed and spoken languages, suggesting that the neural organization of language is largely modality-independent.
ABSTRACT
We report on a right-handed, deaf, lifelong signer who suffered a left posterior cerebral artery (PCA) stroke. The patient presented with right homonymous hemianopia, alexia, and a severe sign comprehension deficit. Her production of sign language was, however, virtually normal. We suggest that her syndrome can be characterized as a case of 'sign blindness': a disconnection of the intact right hemisphere visual areas from the intact left hemisphere language areas. This case provides strong evidence that the neural systems supporting sign language processing are predominantly in the left hemisphere, but it also suggests that there are some differences in the neural organization of signed vs. spoken language within the left hemisphere.
Subjects
Functional Laterality/physiology , Hemianopsia/physiopathology , Occipital Lobe/physiopathology , Sign Language , Space Perception , Female , Hemianopsia/diagnosis , Humans , Middle Aged , Temporal Lobe/physiopathology , Tomography, X-Ray Computed
ABSTRACT
American Sign Language (ASL) uses space itself to encode spatial information. Spatial scenes are most often described from the perspective of the person signing (the 'narrator'), such that the viewer must perform what amounts to a 180-degree mental rotation to correctly comprehend the description. But scenes can also be described, non-canonically, from the viewer's perspective, in which case no rotation is required. Is mental rotation during sign language processing difficult for ASL signers? Are there differences between linguistic and non-linguistic mental rotation? Experiment 1 required subjects to decide whether a signed description matched a room presented on videotape. Deaf ASL signers were more accurate when viewing scenes described from the narrator's perspective (even though rotation is required) than from the viewer's perspective (no rotation required). In Experiment 2, deaf signers and hearing non-signers viewed videotapes of objects appearing briefly and sequentially on a board marked with an entrance. This board either matched an identical board in front of the subject or was rotated 180 degrees. Subjects were asked to place objects on their board in the orientation and location shown on the video, making the appropriate rotation when required. All subjects were significantly less accurate when rotation was required, but ASL signers performed significantly better than hearing non-signers under rotation. ASL signers were also more accurate in remembering object orientation. Signers then viewed a video in which the same scenes were signed from the two perspectives (i.e. rotation required or no rotation required). In contrast to their performance with real objects, signers did not show the typical mental rotation effect. Males outperformed females on the rotation task with objects, but the superiority disappeared in the linguistic condition.
We discuss the nature of the ASL mental rotation transformation, and we conclude that habitual use of ASL can enhance non-linguistic cognitive processes, thus providing evidence for (a form of) the linguistic relativity hypothesis.
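To make concrete what the 180-degree transformation in Experiment 2 entails geometrically, the sketch below maps an object's location and facing direction from the narrator's board to the rotated viewer's frame. The board dimensions and coordinate conventions are illustrative assumptions, not details taken from the experimental materials.

```python
# Minimal sketch (assumed coordinates, not the study's materials) of the
# 180-degree spatial transformation a viewer must perform: rotating a
# point and an orientation about the center of a width x height board.

def rotate_180(x, y, width, height):
    """Rotate a point 180 degrees about the center of a width x height board."""
    return (width - x, height - y)

def rotate_orientation(angle_deg):
    """An object's facing direction also flips by 180 degrees."""
    return (angle_deg + 180) % 360

# An object at (1, 2) on a 10 x 10 board, facing 90 degrees,
# appears at (9, 8) facing 270 degrees after rotation.
print(rotate_180(1, 2, 10, 10))   # (9, 8)
print(rotate_orientation(90))     # 270
```

Note that the center point maps to itself, so only off-center locations and orientations carry a rotation cost, which is consistent with the task stressing object position and facing direction.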
Subjects
Mental Processes/physiology , Rotation , Sign Language , Adult , Deafness , Female , Humans , Linguistics , Male , United States
ABSTRACT
Cuetos and Mitchell (1988) observed that in constructions in which a relative clause can attach to one of two possible sites, English speakers prefer the more recent attachment site, but Spanish speakers prefer the less recent attachment site, in violation of the proposed universal principle Late Closure (Recency Preference), which favors attachments to the most recent sites. Based on this evidence, Cuetos and Mitchell concluded that Late Closure is not a universal principle of the human sentence processing mechanism. In this paper, we provide new evidence from Spanish and English self-paced reading experiments on relative clause attachment ambiguities that involve three possible attachment sites. The results of our experiments suggest that a principle like Late Closure is in fact universally operative in the human parser, but that it is modulated by at least one other factor in the processing of relative clause attachment ambiguities. We propose that the second factor involved in the processing of these and related constructions is the principle of Predicate Proximity, according to which attachments are preferred to be as structurally close to the head of a predicate phrase as possible, and we further consider the origins and predictions of the theory combining these two factors.
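The two-factor account can be sketched as a toy scoring model. This is purely illustrative and not the authors' formalism: the weights are hypothetical, and cross-language differences are modeled simply as different relative weightings of the recency (Late Closure) and Predicate Proximity factors.

```python
# Toy illustration (not the authors' model) of how a recency factor
# (Late Closure) and a Predicate Proximity factor might jointly score
# candidate attachment sites for a relative clause. Weights are
# hypothetical assumptions for the sketch.

def attachment_scores(num_sites, w_recency, w_proximity):
    """Sites are indexed 0..n-1, from closest to the predicate head (0)
    to most recent (n-1). Higher score = preferred attachment."""
    scores = []
    for site in range(num_sites):
        recency = site / (num_sites - 1)        # favors the most recent site
        proximity = 1 - site / (num_sites - 1)  # favors the site nearest the predicate head
        scores.append(w_recency * recency + w_proximity * proximity)
    return scores

# With recency weighted more heavily (an English-like setting), the most
# recent site wins; with proximity weighted more (a Spanish-like setting),
# the site closest to the predicate head wins.
print(attachment_scores(3, w_recency=0.7, w_proximity=0.3))
print(attachment_scores(3, w_recency=0.3, w_proximity=0.7))
```

The point of the sketch is only that a single universal recency principle plus one additional graded factor can reverse the preferred site across languages without abandoning Late Closure itself.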
Subjects
Cognition , Language , Hispanic or Latino , Humans , Mexico , United States/ethnology
ABSTRACT
Recent neuropsychological and functional imaging evidence has suggested a role for anterior temporal cortex in sentence-level comprehension. We explored this hypothesis using event-related fMRI. Subjects were scanned while they listened to either a sequence of environmental sounds describing an event or a corresponding sentence matched as closely as possible in meaning. Both types of stimuli required subjects to integrate auditory information over time to derive a similar meaning, but differed in the processing mechanisms leading to the integration of that information, with speech input requiring syntactic mechanisms and environmental sounds utilizing non-linguistic mechanisms. Consistent with recent claims, sentences produced greater activation than environmental sounds in anterior superior temporal lobe bilaterally. A similar speech > sound activation pattern was also noted in posterior superior temporal regions on the left. Environmental sounds produced greater activation than sentences in right inferior frontal gyrus. The results provide support for the view that anterior temporal cortex plays an important role in sentence-level comprehension.
Subjects
Speech Perception/physiology , Temporal Lobe/physiology , Adult , Dominance, Cerebral/physiology , Environment , Female , Humans , Magnetic Resonance Imaging , Male , Mental Processes/physiology , Sound
ABSTRACT
Magnetoencephalography (MEG) was used to investigate the response to speech sounds that differ in onset dynamics, parameterized as words that have initial stop consonants (e.g., /b/, /t/) or do not (e.g., /m/, /f/). Latency and amplitude of the M100 auditory evoked neuromagnetic field, recorded over right and left auditory cortices, varied as a function of onset: stops had shorter latencies and higher amplitudes than no-stops in both hemispheres, consistent with the hypothesis that M100 is a sensitive indicator of spectral properties of acoustic stimuli. Further, activation patterns in response to stops/no-stops differed in the two hemispheres, possibly reflecting differential perceptual processing for the acoustic-phonetic cues at the onset of spoken words.
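The M100 latency and amplitude measures described above can be illustrated with a generic peak-picking sketch: find the largest deflection in a window around 100 ms post-stimulus in an averaged evoked time course. The window, sampling rate, and synthetic waveform below are illustrative assumptions, not the authors' analysis parameters.

```python
# Generic sketch (assumed parameters, not the study's pipeline) of
# extracting M100 latency and amplitude from a 1-D averaged evoked field.
import numpy as np

def m100_peak(evoked, sfreq, window=(0.080, 0.140)):
    """Return (latency_s, amplitude) of the largest absolute deflection
    in the given post-stimulus window of a 1-D evoked time course."""
    start = int(window[0] * sfreq)
    stop = int(window[1] * sfreq)
    segment = evoked[start:stop]
    idx = int(np.argmax(np.abs(segment)))
    return (start + idx) / sfreq, segment[idx]

# Synthetic example: a Gaussian deflection peaking at 100 ms.
sfreq = 1000.0
t = np.arange(0, 0.3, 1 / sfreq)
evoked = -50e-15 * np.exp(-((t - 0.100) ** 2) / (2 * 0.010 ** 2))
latency, amplitude = m100_peak(evoked, sfreq)
print(round(latency, 3))  # 0.1
```

In this framing, the stop/no-stop contrast amounts to comparing the latency and amplitude values returned for the two stimulus classes, separately over left and right sensor groups.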
Subjects
Evoked Potentials, Auditory/physiology , Reaction Time/physiology , Speech Acoustics , Adult , Analysis of Variance , Humans , Magnetoencephalography , Male
ABSTRACT
Conduction aphasia, characterized by good auditory comprehension and fluent but disordered speech production, is classically viewed as a disconnection syndrome. We review recent evidence which suggests that at least one form of conduction aphasia results from damage to cortical fields in the left posterior superior temporal gyrus which participate not only in speech perception, but also in phonemic aspects of speech production. As a test of this hypothesis, we carried out a 4T functional magnetic resonance imaging study in which subjects named visually presented objects sub-vocally. Group-based analyses showed that a majority of participants exhibited activation in two regions on the dorsal portion of the left posterior superior temporal gyrus.
Subjects
Aphasia, Conduction/physiopathology , Auditory Cortex/physiology , Magnetic Resonance Imaging , Speech Perception/physiology , Speech/physiology , Adult , Dominance, Cerebral/physiology , Female , Humans , Male , Memory Disorders/physiopathology
ABSTRACT
A number of investigators have argued that agrammatic comprehension, the pattern of sentence comprehension often associated with Broca's aphasia, can be characterized in terms of a representational disruption in one or another module of the normal grammar. In this study, these proposals are reviewed and their adequacy is examined in light of two case studies of agrammatic comprehension. In particular, we present data from sentences that have composed the core of the agrammatic comprehension pattern, as well as data from three different classes of sentences including comprehension of the matrix clause of center-embedded relative constructions, pronoun and anaphor dependencies, and Wh-questions. Our conclusion is that none of the existing representational models provides a fully adequate account of the data, and we propose some alternative approaches that distinguish between referential and nonreferential elements and potential processing differences between the two.
Subjects
Brain/physiopathology , Cerebrovascular Disorders/complications , Cerebrovascular Disorders/physiopathology , Language Disorders/etiology , Speech Perception , Aged , Functional Laterality , Humans , Language Disorders/physiopathology , Language Tests , Male
ABSTRACT
This study investigated comprehension of wh-questions in two Broca's aphasics. Patients were presented for comprehension with two types of wh-questions: questions headed by which and questions headed by who. These two types were chosen because according to recent syntactic analyses they give rise to different types of syntactic "chains." These questions were presented in both subject gap versions (e.g., which cat chased the dog?) and object gap versions (e.g., which cat did the dog chase?). Comprehension of which questions was asymmetric, with subject gap versions comprehended significantly better than object gap versions, the latter yielding chance-level performance. This finding is consistent with previous reports of subject-object asymmetries in comprehension of relative clauses and clefts, as well as active-passive comprehension asymmetries. In contrast, comprehension of who questions was symmetrical over subject gap and object gap versions: Both patients performed equally well (significantly better than chance) on subject gap and object gap who questions. These findings are inconsistent with current formulations of "chain" or "trace"-based theories of agrammatic comprehension which assume a deficit that affects both types of syntactic chains. We suggest that linguistic descriptions of agrammatic comprehension should be limited to deficits involving only one type of chain. We also suggest that there are processing differences underlying the syntactic distinctions between which-type and who-type questions and that this may account for different patterns of comprehension on these and other constructions.
Subjects
Aphasia, Broca/physiopathology , Brain/physiopathology , Language , Speech Perception , Aged , Humans , Male , Semantics
ABSTRACT
Previous work has shown that deficits in the production and perception of signed language are linked to left hemisphere damage but not right hemisphere damage in deaf lifelong signers, whereas severe deficits in nonlinguistic visuospatial abilities are more frequent following right hemisphere damage than left hemisphere damage in this population. In the present study we investigated the extent to which sign language deficits in deaf individuals can be dissociated from more subtle visuospatial deficits commonly associated with left hemisphere damage in the hearing/speaking population. A group of left- or right-lesioned deaf signers were asked to reproduce (1) two line drawings (a house and an elephant) and (2) four hierarchical figures. Drawings were scored separately for the presence of local vs. global features. Consistent with data from hearing patients, left hemisphere-damaged (LHD) deaf subjects were significantly better at reproducing global-level features, whereas right hemisphere-damaged deaf subjects were significantly better at reproducing local-level features. This effect held for both types of stimuli. Local-level performance in the LHD group did not correlate with performance on sign language tasks, suggesting that language deficits in LHD deaf signers are in fact linguistically specific.
Subjects
Aphasia/pathology , Brain/pathology , Deafness , Functional Laterality , Sign Language , Visual Perception/physiology , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged
ABSTRACT
The trace-deletion hypothesis (Grodzinsky, 1990) holds that the comprehension deficit apparent in most agrammatic aphasics results from the absence of traces at the level of S-structure. This paper reports a test of this hypothesis in a case study of an agrammatic aphasic. Two experiments (one using a sentence-picture matching task, one using a truth-value judgment task) examined the comprehension of the matrix clause in center-embedded relatives such as The tiger that chased the lion is big. These structures provide a crucial test of the trace-deletion hypothesis because comprehension of the matrix clause (i.e., knowing that the tiger is big and not the lion) is predicted to be unimpaired. Contrary to this prediction, however, the results of the present work show that comprehension of the matrix clause in such sentences is significantly impaired. We argue that a revised version of the trace-deletion hypothesis proposed by Hickok (1992a,b) can explain the present data and other previously unaccountable findings.
Subjects
Aphasia, Broca/complications , Language Disorders/etiology , Aged , Aphasia, Broca/physiopathology , Brain/physiopathology , Female , Humans , Language Disorders/physiopathology , Male , Speech Perception , Task Performance and Analysis
ABSTRACT
Previous findings have demonstrated that hemispheric organization in deaf users of American Sign Language (ASL) parallels that of the hearing population, with the left hemisphere showing dominance for grammatical linguistic functions and the right hemisphere showing specialization for non-linguistic spatial functions. The present study addresses two further questions: first, do extra-grammatical discourse functions in deaf signers show the same right-hemisphere dominance observed for discourse functions in hearing subjects; and second, do discourse functions in ASL that employ spatial relations depend upon more general intact spatial cognitive abilities? We report findings from two right-hemisphere damaged deaf signers, both of whom show disruption of discourse functions in the absence of any disruption of grammatical functions. The exact nature of the disruption differs for the two subjects, however. Subject AR shows difficulty in maintaining topical coherence, while SJ shows difficulty in employing spatial discourse devices. Further, the two subjects are equally impaired on non-linguistic spatial tasks, indicating that spared spatial discourse functions can occur even when more general spatial cognition is disrupted. We conclude that, as in the hearing population, discourse functions involve the right hemisphere; that distinct discourse functions can be dissociated from one another in ASL; and that brain organization for linguistic spatial devices is driven by its functional role in language processing, rather than by its surface, spatial characteristics.
Subjects
Brain Diseases/complications , Communication Disorders/etiology , Deafness/complications , Functional Laterality/physiology , Sign Language , Aged , Communication Disorders/diagnosis , Humans , Male
Subjects
Cerebral Cortex/physiology , Magnetic Resonance Imaging , Speech Perception/physiology , Adult , Arousal/physiology , Attention/physiology , Auditory Cortex/physiology , Brain Mapping , Cerebral Cortex/blood supply , Evoked Potentials, Auditory/physiology , Female , Humans , Regional Blood Flow/physiology
ABSTRACT
Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics) nonetheless comprehend isolated words better than one would expect if their speech perception system had been largely destroyed (70-80% accuracy). Further, when comprehension fails in such patients, their errors are more often semantically based than phonemically based. The question addressed by the present study is whether this ability of the right hemisphere to process speech sounds is a result of plastic reorganization following chronic left hemisphere damage, or whether the ability exists in undamaged language systems. We sought to test these possibilities by studying auditory comprehension in acute left versus right hemisphere deactivation during Wada procedures. A series of 20 patients undergoing clinically indicated Wada procedures were asked to listen to an auditorily presented stimulus word, and then point to its matching picture on a card that contained the target picture, a semantic foil, a phonemic foil, and an unrelated foil. This task was performed under three conditions: at baseline, during left carotid injection of sodium amytal, and during right carotid injection of sodium amytal. Overall, left hemisphere injection led to a significantly higher error rate than right hemisphere injection. However, consistent with lesion work, the majority (75%) of these errors were semantic in nature. These findings suggest that auditory comprehension deficits are predominantly semantic in nature, even following acute left hemisphere disruption. This, in turn, supports the hypothesis that the right hemisphere is capable of speech sound processing in the intact brain.
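The picture-pointing task above yields a simple error taxonomy (semantic, phonemic, unrelated) whose proportions can be tallied per injection condition. The sketch below uses hypothetical trial outcomes, chosen only to mirror the kind of semantic-dominant error profile the study reports; it is not the study's data or scoring code.

```python
# Illustrative tally (hypothetical data, not the study's results) of how
# responses in the three-foil picture-pointing task can be classified and
# error proportions computed for one injection condition.
from collections import Counter

def error_profile(responses):
    """responses: list of 'target', 'semantic', 'phonemic', or 'unrelated'.
    Returns each error type's share of the total errors."""
    errors = [r for r in responses if r != 'target']
    counts = Counter(errors)
    total = len(errors)
    return {kind: counts.get(kind, 0) / total
            for kind in ('semantic', 'phonemic', 'unrelated')}

# Hypothetical left-injection outcomes in which most errors are semantic.
left = ['target'] * 12 + ['semantic'] * 6 + ['phonemic'] * 1 + ['unrelated'] * 1
print(error_profile(left))
# {'semantic': 0.75, 'phonemic': 0.125, 'unrelated': 0.125}
```

Comparing such profiles between left- and right-injection conditions is what separates the overall error-rate effect from the qualitative (semantic vs. phonemic) character of the errors.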
Subjects
Auditory Perception/physiology , Comprehension/physiology , Functional Laterality/physiology , Semantics , Speech Perception/physiology , Adolescent , Adult , Amobarbital/administration & dosage , Amobarbital/pharmacology , Arm/innervation , Epilepsy/physiopathology , Epilepsy/psychology , Female , Humans , Injections, Intra-Arterial , Language , Language Tests/statistics & numerical data , Male , Middle Aged , Psychomotor Performance/drug effects , Psychomotor Performance/physiology , Recognition, Psychology/physiology , Temporal Lobe/drug effects , Temporal Lobe/pathology , Temporal Lobe/physiopathology , Young Adult
ABSTRACT
This paper presents evidence for a new model of the functional anatomy of speech/language (Hickok & Poeppel, 2000) which has, at its core, three central claims: (1) Neural systems supporting the perception of sublexical aspects of speech are essentially bilaterally organized in posterior superior temporal lobe regions; (2) neural systems supporting the production of phonemic aspects of speech comprise a network of predominantly left hemisphere systems which includes not only frontal regions, but also superior temporal lobe regions; and (3) the neural systems supporting speech perception and production partially overlap in left superior temporal lobe. This model, which postulates nonidentical but partially overlapping systems involved in the perception and production of speech, explains why psycho- and neurolinguistic evidence is mixed regarding the question of whether input and output phonological systems involve a common network or distinct networks.