Results 1 - 20 of 134
1.
Q J Exp Psychol (Hove) ; : 17470218241232407, 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38326329

ABSTRACT

Little is known about how information to the left of fixation impacts reading and how it may help to integrate what has been read into the context of the sentence. To better understand the role of this leftward information and how it may be beneficial during reading, we compared the sizes of the leftward span for reading-matched deaf signers (n = 32) and hearing adults (n = 40) using a gaze-contingent moving window paradigm with windows of 1, 4, 7, 10, and 13 characters to the left, as well as a no-window condition. All deaf participants were prelingually and profoundly deaf, used American Sign Language (ASL) as a primary means of communication, and were exposed to ASL before age eight. Analysis of reading rates indicated that deaf readers had a leftward span of 10 characters, compared with 4 characters for hearing readers, and the size of the span was positively related to reading comprehension ability for deaf but not hearing readers. These findings suggest that deaf readers may engage in continued word processing of information obtained to the left of fixation, making reading more efficient and reflecting a qualitatively different reading process than that of hearing readers.
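The gaze-contingent moving-window manipulation described in this abstract can be illustrated with a short sketch (a hypothetical function and display text, not the authors' experimental code): everything more than `span` characters to the left of the fixated character is masked, while the fixated character and everything to its right remain visible.

```python
def leftward_window(text: str, fixation: int, span: int, mask: str = "x") -> str:
    """Mask characters more than `span` characters to the left of fixation.

    Characters at and to the right of `fixation`, plus up to `span`
    characters immediately to its left, stay visible; everything further
    left is replaced with the mask character (spaces are preserved).
    """
    left_edge = max(0, fixation - span)
    masked = "".join(mask if ch != " " else " " for ch in text[:left_edge])
    return masked + text[left_edge:]

# A 4-character leftward window while fixating the 'b' of "brown"
print(leftward_window("The quick brown fox", fixation=10, span=4))
# → xxx xxick brown fox
```

With a wide enough span (or a fixation near the line start) the display is identical to the no-window condition, which is how the span size is estimated from reading rates.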

2.
J Cogn ; 7(1): 19, 2024.
Article in English | MEDLINE | ID: mdl-38312942

ABSTRACT

Grainger et al. (2006) were the first to use ERP masked priming to explore the differing contributions of phonological and orthographic representations to visual word processing. Here we adapted their paradigm to examine word processing in deaf readers. We investigated whether reading-matched deaf and hearing readers (n = 36) exhibit different ERP effects associated with the activation of orthographic and phonological codes during word processing. In a visual masked priming paradigm, participants performed a go/no-go categorization task (detect an occasional animal word). Critical target words preceded by orthographically related (transposed letter - TL) or phonologically related (pseudohomophone - PH) masked non-word primes were contrasted with the same target words preceded by letter-substitution (control) non-word primes. Hearing readers exhibited typical N250 and N400 priming effects (greater negativity for control compared to TL or PH primed targets), and the TL and PH priming effects did not differ. For deaf readers, the N250 PH priming effect was later (250-350 ms), and they showed a reversed N250 priming effect for TL primes in this time window. The N400 TL and PH priming effects did not differ between groups. For hearing readers, those with better phonological and spelling skills showed larger early N250 PH and TL priming effects (150-250 ms). For deaf readers, those with better phonological skills showed a larger reversed TL priming effect in the late N250 window. We speculate that phonological knowledge modulates how strongly deaf readers rely on whole-word orthographic representations and/or the mapping from sublexical to lexical representations.

3.
Curr Dir Psychol Sci ; 32(5): 387-394, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37829330

ABSTRACT

The ten things you should know about sign languages are the following. 1) Sign languages have phonology and poetry. 2) Sign languages vary in their linguistic structure and family history, but share some typological features due to their shared biology (manual production). 3) Although there are many similarities between perceiving and producing speech and sign, the biology of language can impact aspects of processing. 4) Iconicity is pervasive in sign language lexicons and can play a role in language acquisition and processing. 5) Deaf and hard-of-hearing children are at risk for language deprivation. 6) Signers gesture when signing. 7) Sign language experience enhances some visual-spatial skills. 8) The same left hemisphere brain regions support both spoken and sign languages, but some neural regions are specific to sign language. 9) Bimodal bilinguals can code-blend, rather than code-switch, which alters the nature of language control. 10) The emergence of new sign languages reveals patterns of language creation and evolution. These discoveries reveal how language modality does and does not affect language structure, acquisition, processing, use, and representation in the brain. Sign languages provide unique insights into human language that cannot be obtained by studying spoken languages alone.

4.
Neurobiol Lang (Camb) ; 4(2): 361-381, 2023.
Article in English | MEDLINE | ID: mdl-37546690

ABSTRACT

Letter recognition plays an important role in reading and follows different phases of processing, from early visual feature detection to the access of abstract letter representations. Deaf ASL-English bilinguals experience orthography in two forms: English letters and fingerspelling. However, the neurobiological nature of fingerspelling representations, and the relationship between the two orthographies, remains unexplored. We examined the temporal dynamics of single English letter and ASL fingerspelling font processing in an unmasked priming paradigm with centrally presented targets for 200 ms preceded by 100 ms primes. Event-related brain potentials were recorded while participants performed a probe detection task. Experiment 1 examined English letter-to-letter priming in deaf signers and hearing non-signers. We found that English letter recognition is similar for deaf and hearing readers, extending previous findings with hearing readers to unmasked presentations. Experiment 2 examined priming effects between English letters and ASL fingerspelling fonts in deaf signers only. We found that fingerspelling fonts primed both fingerspelling fonts and English letters, but English letters did not prime fingerspelling fonts, indicating a priming asymmetry between letters and fingerspelling fonts. We also found an N400-like priming effect when the primes were fingerspelling fonts which might reflect strategic access to the lexical names of letters. The studies suggest that deaf ASL-English bilinguals process English letters and ASL fingerspelling differently and that the two systems may have distinct neural representations. However, the fact that fingerspelling fonts can prime English letters suggests that the two orthographies may share abstract representations to some extent.

5.
Lang Cogn Neurosci ; 38(5): 636-650, 2023.
Article in English | MEDLINE | ID: mdl-37304206

ABSTRACT

Deaf and hearing readers have different access to spoken phonology which may affect the representation and recognition of written words. We used ERPs to investigate how a matched sample of deaf and hearing adults (total n = 90) responded to lexical characteristics of 480 English words in a go/no-go lexical decision task. Results from mixed effect regression models showed a) visual complexity produced small effects in opposing directions for deaf and hearing readers, b) similar frequency effects, but shifted earlier for deaf readers, c) more pronounced effects of orthographic neighborhood density for hearing readers, and d) more pronounced effects of concreteness for deaf readers. We suggest hearing readers have visual word representations that are more integrated with phonological representations, leading to larger lexically-mediated effects of neighborhood density. Conversely, deaf readers weight other sources of information more heavily, leading to larger semantically-mediated effects and altered responses to low-level visual variables.

6.
Acta Psychol (Amst) ; 236: 103923, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37087958

ABSTRACT

For sign languages, transitional movements of the hands are fully visible and may be used to predict upcoming linguistic input. We investigated whether and how deaf signers and hearing nonsigners use transitional information to detect a target item in a string of either pseudosigns or grooming gestures, as well as whether motor imagery ability was related to this skill. Transitional information between items was either intact (Normal videos), digitally altered such that the hands were selectively blurred (Blurred videos), or edited to only show the frame prior to the transition which was frozen for the entire transition period, removing all transitional information (Static videos). For both pseudosigns and gestures, signers and nonsigners had faster target detection times for Blurred than Static videos, indicating similar use of movement transition cues. For linguistic stimuli (pseudosigns), only signers made use of transitional handshape information, as evidenced by faster target detection times for Normal than Blurred videos. This result indicates that signers can use their linguistic knowledge to interpret transitional handshapes to predict the upcoming signal. Signers and nonsigners did not differ in motor imagery abilities, but only non-signers exhibited evidence of using motor imagery as a prediction strategy. Overall, these results suggest that signers use transitional movement and handshape cues to facilitate sign recognition.


Subjects
Gestures, Hearing, Humans, Cues, Linguistics, Sign Language, Perception
7.
Neuropsychologia ; 183: 108516, 2023 05 03.
Article in English | MEDLINE | ID: mdl-36796720

ABSTRACT

Prior research has found that iconicity facilitates sign production in picture-naming paradigms and has effects on ERP components. These findings may be explained by two separate hypotheses: (1) a task-specific hypothesis that suggests these effects occur because visual features of the iconic sign form can map onto the visual features of the pictures, and (2) a semantic feature hypothesis that suggests that the retrieval of iconic signs results in greater semantic activation due to the robust representation of sensory-motor semantic features compared to non-iconic signs. To test these two hypotheses, iconic and non-iconic American Sign Language (ASL) signs were elicited from deaf native/early signers using a picture-naming task and an English-to-ASL translation task, while electrophysiological recordings were made. Behavioral facilitation (faster response times) and reduced negativities were observed for iconic signs (both prior to and within the N400 time window), but only in the picture-naming task. No ERP or behavioral differences were found between iconic and non-iconic signs in the translation task. This pattern of results supports the task-specific hypothesis and provides evidence that iconicity only facilitates sign production when the eliciting stimulus and the form of the sign can visually overlap (a picture-sign alignment effect).


Subjects
Electrophysiology, Evoked Potentials, Models, Neurological, Sign Language, Translations, United States, Reaction Time, Photic Stimulation, Semantics, Humans, Deafness/physiopathology, Male, Female, Adult, Analysis of Variance
8.
Neuropsychologia ; 177: 108420, 2022 12 15.
Article in English | MEDLINE | ID: mdl-36396091

ABSTRACT

The role of phonology in word recognition has previously been investigated using a masked lexical decision task and transposed letter (TL) nonwords that were either pronounceable (barve) or unpronounceable (brvae). We used event-related potentials (ERPs) to investigate these effects in skilled deaf readers, who may be more sensitive to orthotactic than phonotactic constraints, which are conflated in English. Twenty deaf and twenty hearing adults completed a masked lexical decision task while ERPs were recorded. The groups were matched in reading skill and IQ, but deaf readers had poorer phonological ability. Deaf readers were faster and more accurate at rejecting TL nonwords than hearing readers. Neither group exhibited an effect of nonword pronounceability in RTs or accuracy. For both groups, the N250 and N400 components were modulated by lexicality (more negative for nonwords). The N250 was not modulated by nonword pronounceability, but pronounceable nonwords elicited a larger amplitude N400 than unpronounceable nonwords. Because pronounceable nonwords are more word-like, they may incite activation that is unresolved when no lexical entry is found, leading to a larger N400 amplitude. Similar N400 pronounceability effects for deaf and hearing readers, despite differences in phonological sensitivity, suggest these TL effects arise from sensitivity to lexical-level orthotactic constraints. Deaf readers may have an advantage in processing TL nonwords because of enhanced early visual attention and/or tight orthographic-to-semantic connections, bypassing the phonologically mediated route to word recognition.
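The transposed-letter (TL) nonwords used in this study are built by swapping two adjacent letters of a real word, with the swap position determining pronounceability. A minimal sketch (a hypothetical helper, not the authors' stimulus-construction code) reproduces both examples from the abstract:

```python
def transpose_letters(word: str, i: int) -> str:
    """Return a TL nonword by swapping the letters at positions i and i+1."""
    letters = list(word)
    letters[i], letters[i + 1] = letters[i + 1], letters[i]
    return "".join(letters)

# Both TL nonwords derive from the base word "brave"
print(transpose_letters("brave", 1))  # → barve (pronounceable)
print(transpose_letters("brave", 2))  # → brvae (unpronounceable)
```

The same base word yields either stimulus type, so pronounceability is manipulated while orthographic overlap with the base word is held constant.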


Subjects
Electroencephalography, Evoked Potentials, Adult, Humans, Male, Female, Evoked Potentials/physiology, Reading, Semantics, Hearing, Phonetics
9.
Lang Cogn ; 14(4): 622-644, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36426211

ABSTRACT

Across sign languages, nouns can be derived from verbs through morphophonological changes in movement by (1) movement reduplication and size reduction or (2) size reduction alone. We asked whether these cross-linguistic similarities arise from cognitive biases in how humans construe objects and actions. We tested nonsigners' sensitivity to differences in noun-verb pairs in American Sign Language (ASL) by asking MTurk workers to match images of actions and objects to videos of ASL noun-verb pairs. Experiment 1a's match-to-sample paradigm revealed that nonsigners interpreted all signs, regardless of lexical class, as actions. The remaining experiments used a forced-matching procedure to avoid this bias. Counter to our predictions, nonsigners associated reduplicated movement with actions, not objects (inverting the sign language pattern), and exhibited a minimal bias to associate large movements with actions (as found in sign languages). Whether signs had pantomimic iconicity did not alter nonsigners' judgments. We speculate that the morphophonological distinctions in noun-verb pairs observed in sign languages did not emerge as a result of cognitive biases, but rather as a result of the linguistic pressures of a growing lexicon and the use of space for verbal morphology. Such pressures may override an initial bias to map reduplicated movement to actions, but nevertheless reflect new iconic mappings shaped by linguistic and cognitive experiences.

10.
J Deaf Stud Deaf Educ ; 27(4): 355-372, 2022 09 15.
Article in English | MEDLINE | ID: mdl-35775152

ABSTRACT

The lexical quality hypothesis proposes that the quality of phonological, orthographic, and semantic representations impacts reading comprehension. In Study 1, we evaluated the contributions of lexical quality to reading comprehension in 97 deaf and 98 hearing adults matched for reading ability. While phonological awareness was a strong predictor for hearing readers, for deaf readers, orthographic precision and semantic knowledge, not phonology, predicted reading comprehension (assessed by two different tests). For deaf readers, the architecture of the reading system adapts by shifting reliance from (coarse-grained) phonological representations to high-quality orthographic and semantic representations. In Study 2, we examined the contribution of American Sign Language (ASL) variables to reading comprehension in 83 deaf adults. Fingerspelling (FS) and ASL comprehension skills predicted reading comprehension. We suggest that FS might reinforce orthographic-to-semantic mappings and that sign language comprehension may serve as a linguistic basis for the development of skilled reading in deaf signers.


Subjects
Deafness, Sign Language, Adult, Comprehension, Humans, Reading, Semantics
11.
Proc Natl Acad Sci U S A ; 119(28): e2208884119, 2022 07 12.
Article in English | MEDLINE | ID: mdl-35767673

Subjects
Language
12.
Cognition ; 220: 104979, 2022 03.
Article in English | MEDLINE | ID: mdl-34906848

ABSTRACT

Form priming has been used to identify and demarcate the processes that underlie word and sign recognition. The facilitation that results from the prime and target being related in form is typically interpreted in terms of pre-activation of linguistic representations, with little to no consideration for the potential contributions of increased perceptual overlap between related pairs. Indeed, isolating the contribution of perceptual similarity is impossible in spoken languages; there are no listeners who can perceive speech but have not acquired a sound-based phonological system. Here, we compared the electrophysiological indices of form priming effects in American Sign Language between hearing non-signers (i.e., who had no visual-manual phonological system) and deaf signers. We reasoned that similarities in priming effects between groups would most likely be perceptual in nature, whereas priming effects that are specific to the signer group would reflect pre-activation of phonological representations. Behavior in the go/no-go repetition detection task was remarkably similar between groups. Priming in a pre-N400 window was also largely similar across groups, consistent with an early effect of perceptual similarity. However, priming effects diverged between groups during the subsequent N400 and post-N400 windows. Signers had more typical form priming effects and were especially attuned to handshape overlap, whereas non-signers did not exhibit an N400 component and were more sensitive to location overlap. We attribute this pattern to an interplay between perceptual similarity and phonological knowledge. Perceptual similarity contributes to early phonological priming effects, while phonological knowledge tunes sensitivity to linguistically relevant dimensions of perceptual similarity.


Subjects
Electroencephalography, Sign Language, Evoked Potentials/physiology, Female, Humans, Linguistics, Male, Recognition, Psychology
13.
Behav Res Methods ; 54(5): 2502-2521, 2022 10.
Article in English | MEDLINE | ID: mdl-34918219

ABSTRACT

Picture-naming tasks provide critical data for theories of lexical representation and retrieval and have been performed successfully in sign languages. However, the specific influences of lexical or phonological factors and stimulus properties on sign retrieval are poorly understood. To examine lexical retrieval in American Sign Language (ASL), we conducted a timed picture-naming study using 524 pictures (272 objects and 251 actions). We also compared ASL naming with previous data for spoken English for a subset of 425 pictures. Deaf ASL signers named object pictures faster and more consistently than action pictures, as previously reported for English speakers. Lexical frequency, iconicity, better name agreement, and lower phonological complexity each facilitated naming reaction times (RTs). RTs were also faster for pictures named with shorter signs (measured by average response duration). Target name agreement was higher for pictures with more iconic and shorter ASL names. The visual complexity of pictures slowed RTs and decreased target name agreement. RTs and target name agreement were correlated for ASL and English, but agreement was lower for ASL, possibly due to the English bias of the pictures. RTs were faster for ASL, which we attributed to a smaller lexicon. Overall, the results suggest that models of lexical retrieval developed for spoken languages can be adopted for signed languages, with the exception that iconicity should be included as a factor. The open-source picture-naming data set for ASL serves as an important, first-of-its-kind resource for researchers, educators, or clinicians for a variety of research, instructional, or assessment purposes.
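Target name agreement, one of the measures reported above, is conventionally computed as the proportion of participants who produce the intended (target) name for a picture. A rough sketch with made-up responses (a hypothetical function and data, not the authors' analysis code):

```python
from collections import Counter

def name_agreement(responses: list[str], target: str) -> float:
    """Proportion of naming responses matching the target name for one picture."""
    if not responses:
        return 0.0
    counts = Counter(responses)
    return counts[target] / len(responses)

# Hypothetical naming responses for a picture intended to elicit COW
responses = ["COW", "COW", "COW", "ANIMAL", "COW"]
print(name_agreement(responses, "COW"))  # → 0.8
```

Low values flag pictures whose modal name diverges from the intended target, which is one way an English-biased picture set can depress agreement for ASL.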


Subjects
Names, Sign Language, Humans, Linguistics, Language, Reaction Time/physiology
14.
Psychophysiology ; 59(3): e13975, 2022 03.
Article in English | MEDLINE | ID: mdl-34791683

ABSTRACT

Repetition priming and event-related potentials (ERPs) were used to investigate the time course of sign recognition in deaf users of American Sign Language. Signers performed a go/no-go semantic categorization task to rare probe signs referring to people; critical target items were repeated and unrelated signs. In Experiment 1, ERPs were time-locked either to the onset of the video or to sign onset within the video; in Experiment 2, the same full videos were clipped so that video and sign onset were aligned (removing transitional movements), and ERPs were time-locked to video/sign onset. All analyses revealed an N400 repetition priming effect (less negativity for repeated than unrelated signs) but differed in the timing and/or duration of the N400 effect. Results from Experiment 1 revealed that repetition priming effects began before sign onset within a video, suggesting that signers are sensitive to linguistic information within the transitional movement to sign onset. The timing and duration of the N400 for clipped videos were more parallel to that observed previously for auditorily presented words and was 200 ms shorter than either time-locking analysis from Experiment 1. We conclude that time-locking to full video onset is optimal when early ERP components or sensitivity to transitional movements are of interest and that time-locking to the onset of clipped videos is optimal for priming studies with fluent signers.
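The two time-locking schemes compared in this study amount to epoching the same continuous recording around different event markers (video onset vs. sign onset). A simplified sketch (assumed 500 Hz sampling rate and a -100 to 800 ms window; not the authors' pipeline):

```python
def extract_epoch(samples, event_index, srate, tmin=-0.1, tmax=0.8):
    """Slice an epoch around an event marker in a continuous recording.

    `event_index` is the sample at which the event (e.g., video onset or
    sign onset) occurred; tmin/tmax are in seconds relative to it.
    """
    start = event_index + int(tmin * srate)
    stop = event_index + int(tmax * srate)
    return samples[max(start, 0):stop]

srate = 500  # Hz (assumed)
continuous = list(range(5000))  # stand-in for one EEG channel
video_onset = 1000
sign_onset = 1150  # sign begins 300 ms after video onset at 500 Hz

epoch_video = extract_epoch(continuous, video_onset, srate)
epoch_sign = extract_epoch(continuous, sign_onset, srate)
print(len(epoch_video), len(epoch_sign))  # both 450 samples (0.9 s at 500 Hz)
```

Time-locking to sign onset discards the transitional movement preceding it, which is why priming effects that begin during the transition are only visible when epochs are aligned to video onset.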


Subjects
Evoked Potentials/physiology, Recognition, Psychology, Repetition Priming/physiology, Semantics, Sign Language, Adult, Electroencephalography, Female, Humans, Linguistics, Male, Reaction Time, Video Recording
16.
Brain Lang ; 223: 105044, 2021 12.
Article in English | MEDLINE | ID: mdl-34741986

ABSTRACT

In American Sign Language (ASL) spatial relationships are conveyed by the location of the hands in space, whereas English employs prepositional phrases. Using event-related fMRI, we examined comprehension of perspective-dependent (PD) (left, right) and perspective-independent (PI) (in, on) sentences in ASL and audiovisual English (sentence-picture matching task). In contrast to non-spatial control sentences, PD sentences engaged the superior parietal lobule (SPL) bilaterally for ASL and English, consistent with a previous study with written English. The ASL-English conjunction analysis revealed bilateral SPL activation for PD sentences, but left-lateralized activation for PI sentences. The direct contrast between PD and PI expressions revealed greater SPL activation for PD expressions only for ASL. Increased SPL activation for ASL PD expressions may reflect the mental transformation required to interpret locations in signing space from the signer's viewpoint. Overall, the results suggest both overlapping and distinct neural regions support spatial language comprehension in ASL and English.


Subjects
Deafness, Sign Language, Comprehension/physiology, Humans, Language, Magnetic Resonance Imaging, Parietal Lobe/diagnostic imaging, Parietal Lobe/physiology, United States
17.
Neuropsychologia ; 162: 108051, 2021 11 12.
Article in English | MEDLINE | ID: mdl-34624260

ABSTRACT

Event-related potentials (ERPs) were used to explore the effects of iconicity and structural visual alignment between a picture-prime and a sign-target in a picture-sign matching task in American Sign Language (ASL). Half the targets were iconic signs and were presented after a) a matching visually-aligned picture (e.g., the shape and location of the hands in the sign COW align with the depiction of a cow with visible horns), b) a matching visually-nonaligned picture (e.g., the cow's horns were not clearly shown), and c) a non-matching picture (e.g., a picture of a swing instead of a cow). The other half of the targets were filler signs. Trials in the matching condition were responded to faster than those in the non-matching condition and were associated with smaller N400 amplitudes in deaf ASL signers. These effects were also observed for hearing non-signers performing the same task with spoken-English targets. Trials where the picture-prime was aligned with the sign target were responded to faster than non-aligned trials and were associated with a reduced P3 amplitude rather than a reduced N400, suggesting that picture-sign alignment facilitated the decision process, rather than lexical access. These ERP and behavioral effects of alignment were found only for the ASL signers. The results indicate that iconicity effects on sign comprehension may reflect a task-dependent strategic use of iconicity, rather than facilitation of lexical access.


Subjects
Deafness, Sign Language, Electroencephalography, Evoked Potentials, Female, Humans, Language, Male, Semantics, United States
18.
J Cogn ; 4(1): 39, 2021.
Article in English | MEDLINE | ID: mdl-34514310

ABSTRACT

Environmentally-coupled gestures are defined by Goodwin (2007) as gestures that can only be interpreted by taking into account the physical environment of the speaker. Lexical signs, unlike spoken words, can also be environmentally-coupled because the visual-manual modality allows for signs to be articulated on or near elements in the environment. The speech articulators are largely hidden from view and do not permit environmental coupling. This commentary provides examples of environmentally-coupled signs, which can only be explained within a language-as-situated approach. However, such expressions are also constrained by internal, systematic properties of language, indicating that both language-as-situated and language-as-system approaches are necessary to account for the non-arbitrary (iconic and indexical) properties of language.

19.
Lang Cogn Neurosci ; 36(7): 840-853, 2021.
Article in English | MEDLINE | ID: mdl-34485589

ABSTRACT

The picture word interference (PWI) paradigm and ERPs were used to investigate whether lexical selection in deaf and hearing ASL-English bilinguals occurs via lexical competition or whether the response exclusion hypothesis (REH) for PWI effects is supported. The REH predicts that semantic interference should not occur for bimodal bilinguals because sign and word responses do not compete within an output buffer. Bimodal bilinguals named pictures in ASL, preceded by either a translation equivalent, semantically-related, or unrelated English written word. In both the translation and semantically-related conditions bimodal bilinguals showed facilitation effects: reduced RTs and N400 amplitudes for related compared to unrelated prime conditions. We also observed an unexpected focal left anterior positivity that was stronger in the translation condition, which we speculate may be due to articulatory priming. Overall, the results support the REH and models of bilingual language production that assume lexical selection occurs without competition between languages.

20.
Neuropsychologia ; 161: 108019, 2021 10 15.
Article in English | MEDLINE | ID: mdl-34487737

ABSTRACT

It is currently unclear to what degree language control, which minimizes non-target language interference and increases the probability of selecting target-language words, is similar for sign-speech (bimodal) bilinguals and spoken language (unimodal) bilinguals. To further investigate the nature of language control processes in bimodal bilinguals, we conducted the first event-related potential (ERP) language switching study with hearing American Sign Language (ASL)-English bilinguals. The results showed a pattern that has not been observed in any unimodal language switching study: a switch-related positivity over anterior sites and a switch-related negativity over posterior sites during ASL production in both early and late time windows. No such pattern was found during English production. We interpret these results as evidence that bimodal bilinguals uniquely engage language control at the level of output modalities.


Subjects
Multilingualism, Evoked Potentials, Humans, Language, Sign Language, Speech