1.
Dev Sci ; 24(6): e13124, 2021 11.
Article in English | MEDLINE | ID: mdl-34060185

ABSTRACT

Visual information conveyed by a speaking face aids speech perception. In addition, children's ability to comprehend visual-only speech (speechreading ability) is related to phonological awareness and reading skills in both deaf and hearing children. We tested whether training speechreading would improve speechreading, phoneme blending, and reading ability in hearing children. Ninety-two hearing 4- to 5-year-old children were randomised into two groups: business-as-usual controls, and an intervention group, who completed three weeks of computerised speechreading training. The intervention group showed greater improvements in speechreading than the control group both immediately after training and 3 months later. This was the case for both trained and untrained words. There were no group effects on the phonological awareness or single-word reading tasks, although those with the lowest phoneme blending scores did show greater improvements in blending as a result of training. The improvement in speechreading in hearing children following brief training is encouraging. The results also suggest a hypothesis for future investigation: that a focus on visual speech information may contribute to phonological skills, not only in deaf children but also in hearing children who are at risk of reading difficulties. A video abstract of this article can be viewed at https://www.youtube.com/watch?v=bBdpliGkbkY.


Subjects
Deafness, Lipreading, Preschool Child, Hearing, Humans, Phonetics, Reading
2.
Neuroimage ; 209: 116411, 2020 04 01.
Article in English | MEDLINE | ID: mdl-31857205

ABSTRACT

Deaf late signers provide a unique perspective on the impact of impoverished early language exposure on the neurobiology of language: insights that cannot be gained from research with hearing people alone. Here we contrast the effect of age of sign language acquisition in hearing and congenitally deaf adults to examine the potential impact of impoverished early language exposure on the neural systems supporting a language learnt later in life. We collected fMRI data from deaf and hearing proficient users (N = 52) of British Sign Language (BSL), who learnt BSL either early (native) or late (after the age of 15 years), whilst they watched BSL sentences or strings of meaningless nonsense signs. There was a main effect of age of sign language acquisition (late > early) across deaf and hearing signers in the occipital segment of the left intraparietal sulcus. This finding suggests that late learners of sign language may rely on visual processing more than early learners when processing both linguistic and nonsense sign input, regardless of hearing status. Region-of-interest analyses in the posterior superior temporal cortices (STC) showed an effect of age of sign language acquisition that was specific to deaf signers. In the left posterior STC, activation in response to signed sentences was greater in deaf early signers than deaf late signers. Importantly, responses in the left posterior STC in hearing early and late signers did not differ, and were similar to those observed in deaf early signers. These data lend further support to the argument that robust early language experience, whether signed or spoken, is necessary for left posterior STC to show a 'native-like' response to a later learnt language.
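The group comparison described here can be illustrated with a minimal sketch: a 2 (hearing status) x 2 (age of acquisition) ANOVA over per-participant ROI responses. The data below are synthetic and the analysis is only a schematic stand-in for the paper's actual fMRI group models.

    # Minimal sketch: 2 (hearing status) x 2 (age of acquisition) ANOVA on
    # per-participant ROI betas. Synthetic data; illustrates the design only.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(0)
    n = 13  # hypothetical per-cell sample size
    rows = []
    for hearing in ("deaf", "hearing"):
        for aoa in ("early", "late"):
            shift = 0.4 if aoa == "late" else 0.0  # build in a late > early effect
            for beta in rng.normal(loc=shift, scale=1.0, size=n):
                rows.append({"hearing": hearing, "aoa": aoa, "beta": beta})
    df = pd.DataFrame(rows)

    # Two-way ANOVA: main effects of hearing status and AoA, plus their interaction.
    model = ols("beta ~ C(hearing) * C(aoa)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))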


Subjects
Brain Mapping, Deafness/physiopathology, Language Development, Language, Neuronal Plasticity/physiology, Sign Language, Temporal Lobe/physiology, Adult, Age Factors, Deafness/congenital, Humans, Magnetic Resonance Imaging, Middle Aged, Visual Pattern Recognition/physiology, Temporal Lobe/diagnostic imaging, Temporal Lobe/physiopathology, Young Adult
3.
Dev Sci ; 22(1): e12701, 2019 01.
Article in English | MEDLINE | ID: mdl-30014580

ABSTRACT

Infants as young as 2 months can integrate audio and visual aspects of speech articulation. A shift of attention from the eyes towards the mouth of talking faces occurs around 6 months of age in monolingual infants. However, it is unknown whether this pattern of attention during audiovisual speech processing is influenced by speech and language experience in infancy. The present study investigated this question by analysing audiovisual speech processing in three groups of 4- to 8-month-old infants who differed in their language experience: monolinguals, unimodal bilinguals (infants exposed to two or more spoken languages) and bimodal bilinguals (hearing infants with Deaf mothers). Eye-tracking was used to study patterns of face scanning while infants were viewing faces articulating syllables with congruent, incongruent and silent auditory tracks. Monolinguals and unimodal bilinguals increased their attention to the mouth of talking faces between 4 and 8 months, while bimodal bilinguals did not show any age difference in their scanning patterns. Moreover, older monolinguals (6.6 to 8 months), but not younger monolinguals (4 to 6.5 months), showed increased visual attention to the mouth of faces articulating audiovisually incongruent rather than congruent syllables, indicating surprise or novelty. In contrast, no audiovisual congruency effect was found in unimodal or bimodal bilinguals. Results suggest that speech and language experience influences audiovisual integration in infancy. Specifically, reduced or more variable experience of audiovisual speech from the primary caregiver may lead to less sensitivity to the integration of audio and visual cues of speech articulation.
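Measures like "attention to the mouth" in such eye-tracking studies typically reduce to the proportion of valid gaze samples falling within a mouth area of interest (AOI). A minimal sketch; the AOI box and gaze samples below are hypothetical, not taken from the study.

    # Sketch: proportion of looking time in a "mouth" AOI.
    from dataclasses import dataclass

    @dataclass
    class GazeSample:
        x: float      # screen coordinates in pixels
        y: float
        valid: bool   # tracker reported a usable sample

    MOUTH_AOI = (420, 560, 600, 680)  # (x_min, y_min, x_max, y_max), assumed values

    def mouth_looking_proportion(samples):
        valid = [s for s in samples if s.valid]
        if not valid:
            return float("nan")
        in_aoi = sum(
            1 for s in valid
            if MOUTH_AOI[0] <= s.x <= MOUTH_AOI[2]
            and MOUTH_AOI[1] <= s.y <= MOUTH_AOI[3]
        )
        return in_aoi / len(valid)

    # Example: two of three valid samples land on the mouth region.
    demo = [GazeSample(500, 600, True), GazeSample(450, 620, True),
            GazeSample(100, 90, True)]
    print(mouth_looking_proportion(demo))  # 0.666...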


Subjects
Multilingualism, Speech Perception/physiology, Visual Perception, Adult, Attention, Cues (Psychology), Eye Movements, Face, Female, Humans, Infant, Male, Mouth
4.
J Neurosci ; 37(39): 9564-9573, 2017 09 27.
Article in English | MEDLINE | ID: mdl-28821674

ABSTRACT

To investigate how hearing status, sign language experience, and task demands influence functional responses in the human superior temporal cortices (STC), we collected fMRI data from deaf and hearing participants (male and female), who either acquired sign language early or late in life. Our stimuli in all tasks were pictures of objects. We varied the linguistic and visuospatial processing demands in three different tasks that involved decisions about (1) the sublexical (phonological) structure of the British Sign Language (BSL) signs for the objects, (2) the semantic category of the objects, and (3) the physical features of the objects. Neuroimaging data revealed that in participants who were deaf from birth, STC showed increased activation during visual processing tasks. Importantly, this differed across hemispheres. Right STC was consistently activated regardless of the task, whereas left STC was sensitive to task demands. Significant activation was detected in the left STC only for the BSL phonological task. This task, we argue, placed greater demands on visuospatial processing than the other two tasks. In hearing signers, enhanced activation was absent in both left and right STC during all three tasks. Lateralization analyses demonstrated that the effect of deafness was more task-dependent in the left than the right STC, whereas it was more task-independent in the right than the left STC. These findings indicate how the absence of auditory input from birth leads to dissociable and altered functions of left and right STC in deaf participants. SIGNIFICANCE STATEMENT Those born deaf can offer unique insights into neuroplasticity, in particular in regions of superior temporal cortex (STC) that primarily respond to auditory input in hearing people. Here we demonstrate that in those deaf from birth the left and the right STC have altered and dissociable functions. The right STC was activated regardless of demands on visual processing. In contrast, the left STC was sensitive to the demands of visuospatial processing. Furthermore, hearing signers, with the same sign language experience as the deaf participants, did not activate the STCs. Our data advance current understanding of neural plasticity by determining the differential effects that hearing status and task demands can have on left and right STC function.


Subjects
Auditory Perception, Deafness/physiopathology, Functional Laterality, Short-Term Memory, Sign Language, Temporal Lobe/physiology, Adult, Brain Mapping, Case-Control Studies, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Semantics, Temporal Lobe/physiopathology, Visual Perception
5.
Laterality ; 20(1): 49-68, 2015.
Article in English | MEDLINE | ID: mdl-24875468

ABSTRACT

Although there is consensus that the left hemisphere plays a critical role in language processing, some questions remain. Here we examine the influence of overt versus covert speech production on lateralization, the relationship between lateralization and behavioural measures of language performance, and the strength of lateralization across the subcomponents of language. The present study used functional transcranial Doppler sonography (fTCD) to investigate lateralization of phonological and semantic fluency during both overt and covert word generation in right-handed adults. The laterality index (LI) was left lateralized in all conditions, and there was no difference in the strength of LI between overt and covert speech. This supports the validity of using overt speech in fTCD studies, another benefit of which is that it provides a reliable measure of speech production.
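The laterality index in fTCD studies is conventionally derived from the left-minus-right difference in blood-flow velocity during an activation window. A minimal sketch, assuming already-epoched, baseline-corrected velocity traces; dedicated fTCD toolboxes implement more elaborate peak-based variants.

    import numpy as np

    def laterality_index(left_cbfv, right_cbfv):
        """LI from epoched, baseline-corrected CBFV traces (trials x samples).
        Positive values indicate left lateralisation."""
        diff = np.asarray(left_cbfv) - np.asarray(right_cbfv)
        return float(diff.mean())

    rng = np.random.default_rng(1)
    left = rng.normal(2.0, 1.0, size=(20, 100))   # hypothetical % signal change
    right = rng.normal(0.5, 1.0, size=(20, 100))
    print(laterality_index(left, right))  # > 0 => left lateralised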


Subjects
Brain/physiology, Functional Laterality, Phonetics, Semantics, Adult, Female, Humans, Language Tests, Male, Middle Aged, Transcranial Doppler Ultrasonography, Young Adult
6.
Neuroimage ; 100: 347-57, 2014 Oct 15.
Article in English | MEDLINE | ID: mdl-24907483

ABSTRACT

There is evidence of both crossmodal and intermodal plasticity in the deaf brain. Here, we investigated whether sub-cortical plasticity, specifically of the thalamus, contributed to this reorganisation. We contrasted diffusion weighted magnetic resonance imaging data from 13 congenitally deaf and 13 hearing participants, all of whom had learnt British Sign Language after 10 years of age. Connectivity based segmentation of the thalamus revealed changes to mean and radial diffusivity in occipital and frontal regions, which may be linked to enhanced peripheral visual acuity, and differences in how visual attention is deployed in the deaf group. Using probabilistic tractography, tracts were traced between the thalamus and its cortical targets, and microstructural measurements were extracted from these tracts. Group differences were found in microstructural measurements of occipital, frontal, somatosensory, motor and parietal thalamo-cortical tracts. Our findings suggest that there is sub-cortical plasticity in the deaf brain, and that white matter alterations can be found throughout the deaf brain, rather than being restricted to, or focussed in, the auditory cortex.
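Mean and radial diffusivity are simple functions of the diffusion tensor's eigenvalues. A sketch of the standard per-voxel arithmetic, with illustrative white-matter-like values:

    import numpy as np

    def diffusivity_metrics(eigenvalues):
        """Mean, axial and radial diffusivity from tensor eigenvalues.
        eigenvalues: array (..., 3), sorted so that l1 >= l2 >= l3."""
        ev = np.asarray(eigenvalues)
        l1, l2, l3 = ev[..., 0], ev[..., 1], ev[..., 2]
        md = (l1 + l2 + l3) / 3.0   # mean diffusivity
        ad = l1                     # axial diffusivity
        rd = (l2 + l3) / 2.0        # radial diffusivity
        return md, ad, rd

    # Hypothetical eigenvalues in mm^2/s for a coherent white-matter voxel.
    print(diffusivity_metrics(np.array([1.7e-3, 0.4e-3, 0.3e-3])))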


Subjects
Cerebral Cortex/anatomy & histology, Deafness/pathology, Diffusion Magnetic Resonance Imaging/methods, Neuronal Plasticity/physiology, Thalamus/anatomy & histology, Adult, Cerebral Cortex/pathology, Diffusion Tensor Imaging, Female, Humans, Male, Neural Pathways/anatomy & histology, Neural Pathways/pathology, Thalamus/pathology, White Matter/anatomy & histology, White Matter/pathology
7.
Infant Behav Dev ; 76: 101959, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38781790

ABSTRACT

Werker and Tees (1984) prompted decades of research attempting to detail the paths infants take towards specialisation for the sounds of their native language(s). Most of this research has examined the trajectories of monolingual children. However, it has also been proposed that bilinguals, who are exposed to greater phonetic variability than monolinguals and must learn the rules of two languages, may remain perceptually open to non-native language sounds later into life than monolinguals. Using a visual habituation paradigm, the current study tests this proposal by comparing 15- to 18-month-old monolingual and bilingual children's developmental trajectories for non-native phonetic consonant contrast discrimination. A novel approach to the integration of stimulus presentation software with eye-tracking software was validated for objective measurement of infant looking time. The results did not support the hypothesis of a protracted period of sensitivity to non-native phonetic contrasts in bilingual compared to monolingual infants. Implications for the diversification of perceptual narrowing research and the implementation of increasingly sensitive measures are discussed.
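In visual habituation paradigms, habituation is commonly declared once looking time over a sliding window falls below a fixed proportion of initial looking. A sketch under that common convention; the 50% criterion and window size are generic assumptions, not parameters taken from this study.

    def habituated(looking_times, window=3, criterion=0.5):
        """True once mean looking in the last `window` trials drops below
        `criterion` x mean looking in the first `window` trials."""
        if len(looking_times) < 2 * window:
            return False
        baseline = sum(looking_times[:window]) / window
        recent = sum(looking_times[-window:]) / window
        return recent < criterion * baseline

    # Example: looking times (s) decline across trials to below half of baseline.
    print(habituated([12.0, 10.5, 11.0, 6.0, 4.5, 4.0]))  # True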

8.
J Cogn Neurosci ; 25(7): 1037-48, 2013 Jul.
Article in English | MEDLINE | ID: mdl-23448521

ABSTRACT

We used electrophysiology to determine the time course and distribution of neural activation during an English word rhyme task in hearing and congenitally deaf adults. Behavioral performance by hearing participants was at ceiling, and their ERP data replicated two robust effects repeatedly observed in the literature. First, a sustained negativity, termed the contingent negative variation, was elicited following the first stimulus word. This negativity was asymmetric, being more negative over left than right sites. The second effect we replicated in hearing participants was an enhanced negativity (N450) to nonrhyming second stimulus words. This was largest over medial, parietal regions of the right hemisphere. Accuracy on the rhyme task by the deaf group as a whole was above chance level, yet significantly poorer than that of hearing participants. We examined only ERP data from deaf participants who performed the task above chance level (n = 9). We observed indications of subtle differences in ERP responses between the deaf and hearing groups. However, overall the patterns in the deaf group were very similar to those in the hearing group. Deaf participants, just as hearing participants, showed greater negativity to nonrhyming than rhyming words. Furthermore, the onset latency of this effect was the same as that observed in hearing participants. Overall, the neural processes supporting explicit phonological judgments are very similar in deaf and hearing people, despite differences in the modality of spoken language experience. This supports the suggestion that phonological processing is to a large degree amodal or supramodal.
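ERP effects such as the N450 are typically quantified as the mean amplitude within a post-stimulus time window, contrasted between conditions. A minimal sketch assuming epoched data (trials x channels x samples); the window bounds and synthetic data are illustrative only.

    import numpy as np

    def mean_window_amplitude(epochs, times, t_min=0.35, t_max=0.55):
        """Mean amplitude in [t_min, t_max] s, averaged over trials and
        samples, returned per channel. epochs: trials x channels x samples."""
        mask = (times >= t_min) & (times <= t_max)
        return epochs[..., mask].mean(axis=(0, 2))

    times = np.linspace(-0.2, 0.8, 501)
    rng = np.random.default_rng(2)
    rhyme = rng.normal(0.0, 1.0, size=(40, 32, times.size))
    nonrhyme = rhyme - 0.8  # hypothetical enhanced negativity for nonrhymes
    effect = mean_window_amplitude(nonrhyme, times) - mean_window_amplitude(rhyme, times)
    print(effect[:4])       # negative values ~ an N450-like effect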


Subjects
Deafness/physiopathology, Evoked Potentials/physiology, Judgment/physiology, Neurobiology, Periodicity, Adult, Analysis of Variance, Brain Mapping, Electroencephalography, Female, Functional Laterality/physiology, Functional Laterality/radiation effects, Humans, Male, Middle Aged, Photic Stimulation, Reaction Time, Young Adult
9.
Biling (Camb Engl) ; 26(4): 835-844, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37636491

ABSTRACT

Bilingual infants rely on facial information, such as lip patterns, differently from monolinguals when differentiating their native languages. This may explain, at least in part, why young monolinguals and bilinguals show differences in social attention. For example, in the first year, bilinguals attend faster and more often to static faces over non-faces than do monolinguals (Mercure et al., 2018). However, the developmental trajectories of these differences are unknown. In this pre-registered study, data were collected from 15- to 18-month-old monolinguals (English) and bilinguals (English and another language) to test whether group differences in face-looking behaviour persist into the second year. We predicted that bilinguals would orient more rapidly and more often to static faces than monolinguals. Results supported the first but not the second hypothesis. This suggests that, even into the second year of life, toddlers' rapid visual orientation to static social stimuli is sensitive to early language experience.

10.
Cortex ; 154: 105-134, 2022 09.
Article in English | MEDLINE | ID: mdl-35777191

ABSTRACT

BACKGROUND: Most people have strong left-brain lateralisation for language, with a minority showing right- or bilateral language representation. On some receptive language tasks, however, lateralisation appears to be reduced or absent. This contrasting pattern raises the question of whether and how language laterality may fractionate within individuals. Building on our prior work, we postulated (a) that there can be dissociations in lateralisation of different components of language, and (b) that these would be more common in left-handers. A subsidiary hypothesis was that laterality indices would cluster according to two underlying factors corresponding to whether they involve generation of words or sentences, versus receptive language. METHODS: We tested these predictions in two stages. At Step 1, an online laterality battery (Dichotic Listening, Rhyme Decision and Word Comprehension) was given to 621 individuals (56% left-handers); at Step 2, functional transcranial Doppler ultrasound (fTCD) was used with 230 of these individuals (51% left-handers). 108 left-handers and 101 right-handers gave usable data on a battery of three language generation and three receptive language tasks. RESULTS: Neither the online nor the fTCD measures supported the notion of a single language laterality factor. In general, for both online and fTCD measures, tests of language generation were left-lateralised. In contrast, the receptive tasks were at best weakly left-lateralised or, in the case of Word Comprehension, slightly right-lateralised. The online measures were only weakly correlated, if at all, with fTCD measures. Most of the fTCD measures had split-half reliabilities of at least .7, and showed a distinctive pattern of intercorrelation, supporting a modified two-factor model in which Phonological Decision (generation) and Sentence Decision (reception) loaded on both factors. The same factor structure fitted data from left- and right-handers, but mean scores on the two factors were lower (less left-lateralised) in left-handers. CONCLUSIONS: There are at least two factors influencing language lateralisation in individuals, but they do not correspond neatly to language generation and comprehension. Future fMRI studies could help clarify how far they reflect activity in specific brain regions.
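The split-half reliabilities reported for the fTCD measures are conventionally computed by correlating laterality indices from odd and even trials and applying the Spearman-Brown correction. A sketch on synthetic per-trial LIs; the sample sizes and noise levels are invented.

    import numpy as np

    def split_half_reliability(trial_lis):
        """Spearman-Brown-corrected odd/even split-half reliability.
        trial_lis: array (participants x trials) of per-trial laterality indices."""
        odd = trial_lis[:, 0::2].mean(axis=1)
        even = trial_lis[:, 1::2].mean(axis=1)
        r = np.corrcoef(odd, even)[0, 1]
        return 2 * r / (1 + r)

    rng = np.random.default_rng(3)
    true_li = rng.normal(2.0, 1.5, size=(100, 1))           # stable individual LIs
    trials = true_li + rng.normal(0, 1.0, size=(100, 24))   # noisy per-trial LIs
    print(split_half_reliability(trials))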


Subjects
Functional Laterality, Language, Brain, Cerebrovascular Circulation, Humans, Transcranial Doppler Ultrasonography
12.
Brain ; 132(Pt 7): 1928-40, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19467990

ABSTRACT

Hearing developmental dyslexics and profoundly deaf individuals both have difficulties processing the internal structure of words (phonological processing) and learning to read. In hearing non-impaired readers, the development of phonological representations depends on audition. In hearing dyslexics, many argue, auditory processes may be impaired. In congenitally profoundly deaf individuals, auditory speech processing is essentially absent. Two separate literatures have previously reported enhanced activation in the left inferior frontal gyrus in both deaf and dyslexic adults when contrasted with hearing non-dyslexics during reading or phonological tasks. Here, we used a rhyme judgement task to compare adults from these two special populations to a hearing non-dyslexic control group. All groups were matched on non-verbal intelligence quotient, reading age and rhyme performance. Picture stimuli were used since they require participants to generate their own phonological representations, rather than have them partially provided via text. By testing well-matched groups of participants on the same task, we aimed to establish whether the previous literatures reporting differences between individuals with and without phonological processing difficulties have identified the same regions of differential activation in these two distinct populations. The data indicate greater activation in the deaf and dyslexic groups than in the hearing non-dyslexic group across a large portion of the left inferior frontal gyrus. This includes the pars triangularis, extending superiorly into the middle frontal gyrus and posteriorly to include the pars opercularis and the junction with the ventral precentral gyrus. Within the left inferior frontal gyrus, there was variability between the two groups with phonological processing difficulties. The superior posterior tip of the left pars opercularis, extending into the precentral gyrus, was activated to a greater extent by deaf than dyslexic participants, whereas the superior posterior portion of the pars triangularis, extending into the ventral pars opercularis, was activated to a greater extent by dyslexic than deaf participants. Whether these regions play differing roles in compensating for poor phonological processing is not clear. However, we argue that our main finding of greater inferior frontal gyrus activation in both groups with phonological processing difficulties, in contrast to controls, suggests greater reliance on the articulatory component of speech during phonological processing when auditory processes are absent (deaf group) or impaired (dyslexic group). Thus, the brain appears to develop a similar solution to a processing problem that has different antecedents in these two populations.


Subjects
Deafness/physiopathology, Dyslexia/physiopathology, Frontal Lobe/physiopathology, Adolescent, Adult, Brain Mapping/methods, Case-Control Studies, Deafness/congenital, Deafness/psychology, Dyslexia/psychology, Female, Humans, Magnetic Resonance Imaging/methods, Male, Middle Aged, Phonetics, Photic Stimulation/methods, Reaction Time/physiology, Verbal Behavior/physiology, Young Adult
13.
iScience ; 23(11): 101650, 2020 Nov 20.
Article in English | MEDLINE | ID: mdl-33103087

ABSTRACT

When people talk, they move their hands to enhance meaning. Using accelerometry, we measured whether people spontaneously use their artificial limbs (prostheses) to gesture, and whether this behavior relates to everyday prosthesis use and perceived embodiment. Perhaps surprisingly, one- and two-handed participants did not differ in the number of gestures they produced in gesture-facilitating tasks. However, they did differ in their gesture profile. One-handers performed more, and bigger, gesture movements with their intact hand relative to their prosthesis. Importantly, one-handers who gestured more similarly to their two-handed counterparts also used their prosthesis more in everyday life. Although collectively one-handers only marginally agreed that their prosthesis feels like a body part, one-handers who reported that they embody their prosthesis also showed greater prosthesis use for communication and daily function. Our findings provide the first empirical link between everyday prosthesis-use habits and perceived embodiment, as well as a novel means of implicitly indexing embodiment.
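Accelerometry-based measures of hand use typically reduce to thresholding the deviation of acceleration magnitude from gravity, per wrist. A rough sketch; the units, threshold, and data below are illustrative assumptions, not the study's actual processing pipeline.

    import numpy as np

    def active_samples(acc_xyz, threshold=0.15):
        """Count samples whose acceleration magnitude deviates from 1 g
        by more than `threshold` (all values in units of g)."""
        magnitude = np.linalg.norm(acc_xyz, axis=1)
        return int(np.sum(np.abs(magnitude - 1.0) > threshold))

    rng = np.random.default_rng(4)
    rest = rng.normal([0.0, 0.0, 1.0], 0.02, size=(500, 3))     # hand at rest (~1 g)
    gesture = rng.normal([0.3, 0.2, 1.0], 0.20, size=(500, 3))  # hand in motion
    print(active_samples(rest), active_samples(gesture))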

14.
J Speech Lang Hear Res ; 63(11): 3775-3785, 2020 11 13.
Article in English | MEDLINE | ID: mdl-33108258

ABSTRACT

Purpose: Speechreading (lipreading) is a correlate of reading ability in both deaf and hearing children. We investigated whether the relationship between speechreading and single-word reading is mediated by phonological awareness in deaf and hearing children. Method: In two separate studies, 66 deaf children and 138 hearing children, aged 5 to 8 years, were assessed on measures of speechreading, phonological awareness, and single-word reading. We assessed the concurrent relationships between latent variables measuring speechreading, phonological awareness, and single-word reading. Results: In both deaf and hearing children, there was a strong relationship between speechreading and single-word reading, which was fully mediated by phonological awareness. Conclusions: These results are consistent with ideas from previous studies that visual speech information contributes to the development of phonological representations in both deaf and hearing children, which, in turn, support learning to read. Future longitudinal and training studies are required to establish whether these relationships reflect causal effects.
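Mediation of this kind is conventionally tested by estimating the indirect path a x b (speechreading -> phonological awareness -> reading) and checking whether the direct path shrinks towards zero. A minimal observed-variable sketch on synthetic data; the paper itself used latent-variable models, which this does not reproduce.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n = 138
    speechread = rng.normal(size=n)
    phon_aware = 0.6 * speechread + rng.normal(scale=0.8, size=n)  # path a
    reading = 0.7 * phon_aware + rng.normal(scale=0.8, size=n)     # path b, no direct path
    df = pd.DataFrame({"sr": speechread, "pa": phon_aware, "read": reading})

    a = smf.ols("pa ~ sr", df).fit().params["sr"]
    fit = smf.ols("read ~ sr + pa", df).fit()
    b, c_prime = fit.params["pa"], fit.params["sr"]
    # Full mediation: indirect effect a*b is substantial, direct effect c' ~ 0.
    print(f"indirect a*b = {a * b:.2f}, direct c' = {c_prime:.2f}")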


Subjects
Deafness, Lipreading, Child, Preschool Child, Hearing, Humans, Phonetics, Reading, Vocabulary
15.
Neurobiol Lang (Camb) ; 1(1): 9-32, 2020.
Article in English | MEDLINE | ID: mdl-32274469

ABSTRACT

Recent neuroimaging studies suggest that monolingual infants activate a left-lateralized frontotemporal brain network in response to spoken language, which is similar to the network involved in processing spoken and signed language in adulthood. However, it is unclear how brain activation to language is influenced by early experience in infancy. To address this question, we present functional near-infrared spectroscopy (fNIRS) data from 60 hearing infants (4 to 8 months of age): 19 monolingual infants exposed to English, 20 unimodal bilingual infants exposed to two spoken languages, and 21 bimodal bilingual infants exposed to English and British Sign Language (BSL). Across all infants, spoken language elicited activation in a bilateral brain network including the inferior frontal and posterior temporal areas, whereas sign language elicited activation in the right temporoparietal area. A significant difference in brain lateralization was observed between groups. Activation in the posterior temporal region was not lateralized in monolinguals and bimodal bilinguals, but right lateralized in response to both language modalities in unimodal bilinguals. This suggests that the experience of two spoken languages influences brain activation for sign language when experienced for the first time. Multivariate pattern analyses (MVPAs) could classify distributed patterns of activation within the left hemisphere for spoken and signed language in monolinguals (proportion correct = 0.68; p = 0.039) but not in unimodal or bimodal bilinguals. These results suggest that bilingual experience in infancy influences brain activation for language and that unimodal bilingual experience has greater impact on early brain lateralization than bimodal bilingual experience.
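The MVPA reported here amounts to training a classifier to distinguish spoken- from signed-language activation patterns and scoring it with cross-validation. A minimal scikit-learn sketch on synthetic fNIRS-like channel patterns; the dimensions and effect size are invented.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(6)
    n_trials, n_channels = 40, 24                  # hypothetical dimensions
    spoken = rng.normal(0.0, 1.0, size=(n_trials, n_channels))
    signed = rng.normal(0.3, 1.0, size=(n_trials, n_channels))  # small pattern shift
    X = np.vstack([spoken, signed])
    y = np.array([0] * n_trials + [1] * n_trials)

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    scores = cross_val_score(clf, X, y, cv=5)      # proportion correct per fold
    print(scores.mean())                           # above 0.5 => decodable patterns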

16.
Trends Cogn Sci ; 12(11): 432-40, 2008 Nov.
Article in English | MEDLINE | ID: mdl-18805728

ABSTRACT

Most of our knowledge about the neurobiological bases of language comes from studies of spoken languages. By studying signed languages, we can determine whether what we have learnt so far is characteristic of language per se or whether it is specific to languages that are spoken and heard. Overwhelmingly, lesion and neuroimaging studies indicate that the neural systems supporting signed and spoken language are very similar: both involve a predominantly left-lateralised perisylvian network. Recent studies have also highlighted processing differences between languages in these different modalities. These studies provide rich insights into language and communication processes in deaf and hearing people.


Subjects
Brain/physiology, Cognition/physiology, Language, Sign Language, Brain/anatomy & histology, Deafness/physiopathology, Functional Laterality/physiology, Hearing/physiology, Humans, Magnetic Resonance Imaging
17.
Dev Cogn Neurosci ; 36: 100619, 2019 04.
Article in English | MEDLINE | ID: mdl-30711882

ABSTRACT

The effect of sensory experience on hemispheric specialisation for language production is not well understood. Children born deaf, including those who have cochlear implants, have drastically different perceptual experiences of language than their hearing peers. Using functional transcranial Doppler sonography (fTCD), we measured lateralisation during language production in a heterogeneous group of 19 deaf children and in 19 hearing children, matched on language ability. In children born deaf, we observed significant left lateralisation during language production (British Sign Language, spoken English, or a combination of languages). There was no difference in the strength of lateralisation between deaf and hearing groups. Comparable proportions of children were categorised as left-, right-, or not significantly-lateralised in each group. Moreover, an exploratory subgroup analysis showed no significant difference in lateralisation between deaf children with cochlear implants and those without. These data suggest that the processes underpinning language production remain robustly left lateralised regardless of sensory language experience.


Subjects
Deafness/physiopathology, Cerebral Dominance/physiology, Child, Female, Humans, Language, Male
18.
Curr Biol ; 29(21): 3739-3747.e5, 2019 11 04.
Article in English | MEDLINE | ID: mdl-31668623

ABSTRACT

Conceptual knowledge is fundamental to human cognition. Yet, the extent to which it is influenced by language is unclear. Studies of semantic processing show that similar neural patterns are evoked by the same concepts presented in different modalities (e.g., spoken words and pictures or text) [1-3]. This suggests that conceptual representations are "modality independent." However, an alternative possibility is that the similarity reflects retrieval of common spoken language representations. Indeed, in hearing spoken language users, text and spoken language are co-dependent [4, 5], and pictures are encoded via visual and verbal routes [6]. A parallel approach investigating semantic cognition shows that bilinguals activate similar patterns for the same words in their different languages [7, 8]. This suggests that conceptual representations are "language independent." However, this has only been tested in spoken language bilinguals. If different languages evoke different conceptual representations, this should be most apparent comparing languages that differ greatly in structure. Hearing people with signing deaf parents are bilingual in sign and speech: languages conveyed in different modalities. Here, we test the influence of modality and bilingualism on conceptual representation by comparing semantic representations elicited by spoken British English and British Sign Language in hearing early, sign-speech bilinguals. We show that representations of semantic categories are shared for sign and speech, but not for individual spoken words and signs. This provides evidence for partially shared representations for sign and speech and shows that language acts as a subtle filter through which we understand and interact with the world.
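Comparisons of semantic representations across languages are commonly run as representational similarity analyses: build a representational dissimilarity matrix (RDM) per language and correlate their entries. A sketch on synthetic patterns; the item counts and noise levels are invented.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(7)
    n_items, n_voxels = 20, 100                 # hypothetical concepts x voxels
    shared = rng.normal(size=(n_items, n_voxels))
    english = shared + rng.normal(scale=0.5, size=shared.shape)
    bsl = shared + rng.normal(scale=0.5, size=shared.shape)

    # RDM = pairwise correlation distance between item patterns, per language.
    rdm_english = pdist(english, metric="correlation")
    rdm_bsl = pdist(bsl, metric="correlation")

    rho, p = spearmanr(rdm_english, rdm_bsl)    # second-order similarity
    print(f"rho = {rho:.2f}, p = {p:.3g}")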


Subjects
Multilingualism, Semantics, Sign Language, Speech, Adult, England, Female, Humans, Male, Middle Aged, Young Adult
19.
J Speech Lang Hear Res ; 62(8): 2882-2894, 2019 08 15.
Article in English | MEDLINE | ID: mdl-31336055

ABSTRACT

Purpose: We developed and evaluated, in a randomized controlled trial, a computerized speechreading training program to determine (a) whether it is possible to train speechreading in deaf children and (b) whether speechreading training results in improvements in phonological and reading skills. Previous studies indicate a relationship between speechreading and reading skill and further suggest this relationship may be mediated by improved phonological representations. This is important since many deaf children find learning to read to be very challenging. Method: Sixty-six deaf 5- to 7-year-olds were randomized into speechreading and maths training arms. Each training program consisted of one 10-min session a day, 4 days a week, for 12 weeks. Children were assessed on a battery of language and literacy measures before training, immediately after training, and 3 months and 11 months after training. Results: We found no significant benefits for participants who completed the speechreading training, compared to those who completed the maths training, on the speechreading primary outcome measure. However, significantly greater gains were observed in the speechreading training group on one of the secondary measures of speechreading. There was also some evidence of beneficial effects of the speechreading training on phonological representations; however, these effects were weaker. No benefits were seen to word reading. Conclusions: Speechreading skill is trainable in deaf children. However, to support early reading, training may need to be longer or embedded in a broader literacy program. Nevertheless, a training tool that can improve speechreading is likely to be of great interest to professionals working with deaf children. Supplemental Material: https://doi.org/10.23641/asha.8856356.


Subjects
Child Language, Computer-Assisted Instruction/methods, Deafness/rehabilitation, Lipreading, Patient Education as Topic/methods, Child, Preschool Child, Communication Aids for Disabled, Deafness/psychology, Female, Humans, Language Tests, Literacy, Male, Phonetics, Reading
20.
Neuropsychologia ; 46(5): 1233-41, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18249420

ABSTRACT

This fMRI study explored the functional neural organisation of seen speech in congenitally deaf native signers and hearing non-signers. Both groups showed extensive activation in perisylvian regions for speechreading words compared to viewing the model at rest. In contrast to earlier findings, activation in left middle and posterior portions of superior temporal cortex, including regions within the lateral sulcus and the superior and middle temporal gyri, was greater for deaf than hearing participants. This activation pattern survived covarying for speechreading skill, which was better in deaf than hearing participants. Furthermore, correlational analysis showed that regions of activation related to speechreading skill varied with the hearing status of the observers. Deaf participants showed a positive correlation between speechreading skill and activation in the middle/posterior superior temporal cortex. In hearing participants, however, more posterior and inferior temporal activation (including fusiform and lingual gyri) was positively correlated with speechreading skill. Together, these findings indicate that activation in the left superior temporal regions for silent speechreading can be modulated by both hearing status and speechreading skill.


Subjects
Deafness/physiopathology, Hearing/physiology, Lipreading, Nerve Net/physiopathology, Adolescent, Adult, Analysis of Variance, Auditory Cortex/physiopathology, Cerebral Cortex/physiology, Statistical Data Interpretation, Echo-Planar Imaging, Female, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Middle Aged, Photic Stimulation, Sign Language