Results 1 - 20 of 115
1.
Infant Child Dev ; 33(1)2024.
Article in English | MEDLINE | ID: mdl-38425545

ABSTRACT

Open science practices, such as pre-registration and data sharing, increase transparency and may improve the replicability of developmental science. However, developmental science has lagged behind other fields in implementing open science practices. This lag may arise from unique challenges and considerations of longitudinal research. In this paper, preliminary guidelines are provided for adapting open science practices to longitudinal research to facilitate researchers' use of these practices. The guidelines propose a serial and modular approach to registration that includes an initial pre-registration of the methods and focal hypotheses of the longitudinal study, along with subsequent pre- or co-registered questions, hypotheses, and analysis plans associated with specific papers. Researchers are encouraged to share their research materials and relevant data with associated papers, and to report sufficient information for replicability. In addition, there should be careful consideration about requirements regarding the timing of data sharing, to avoid disincentivizing longitudinal research.

2.
Ear Hear ; 44(3): 572-587, 2023.
Article in English | MEDLINE | ID: mdl-36542839

ABSTRACT

OBJECTIVES: The ability to adapt to subtle variations in acoustic input is a necessary skill for successful speech perception. Cochlear implant (CI) users tend to show speech perception benefits from the maintenance of their residual acoustic hearing. However, previous studies often compare CI users in different listening conditions within-subjects (i.e., in their typical Acoustic + Electric configuration compared with Acoustic-only or Electric-only configurations), and comparisons among different groups of CI users do not always reflect an Acoustic + Electric benefit. Existing work suggests that CI users with residual acoustic hearing perform similarly to Electric-only listeners on phonetic voicing contrasts and, unexpectedly, more poorly on fricative contrasts, which have little energy in the range of the Acoustic + Electric listeners' acoustic hearing. To further investigate how residual acoustic hearing impacts sensitivity to phonetic ambiguity, we examined whether device configuration, age, and device experience influenced phonetic categorization in a large individual differences study. DESIGN: CI users with various device configurations (Electric-only N = 41; Acoustic + Electric N = 95) categorized tokens from five /b-p/ and five /s-ʃ/ minimal pair continua (e.g., bet-pet; sock-shock). We investigated age, device experience, and, when applicable, residual acoustic hearing (pure tone hearing thresholds) as predictors of categorization. We also examined the relationship between phonetic categorization and clinical outcomes (CNC, AzBio) in a subset of our sample. RESULTS: Acoustic + Electric CI users were better able to categorize along the voicing contrast (steeper categorization slope) compared with Electric-only users, but there was no group-level difference for fricatives. There were differences within the subgroups for fricatives: bilateral users showed better categorization than unilateral users, and bimodal users had better categorization than hybrid users.
Age was a significant factor for voicing, while device experience was significant for fricatives. Critically, within the Acoustic + Electric group, hybrid CI users had shallower slopes than bimodal CI users. CONCLUSIONS: Our findings suggest residual acoustic hearing is beneficial for categorizing stop voicing, but not frication. Age impacts the categorization of voicing, while device experience matters for fricatives. For CI users with ipsilateral residual acoustic hearing, those with better hearing thresholds may be over-relying on their acoustic hearing rather than extracting as much information as possible from their CI, and thus have shallower fricative categorization.
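The group differences above hinge on the slope of each listener's identification function. As a rough sketch of how such a slope is typically estimated (this is not the authors' analysis; the continuum steps and response proportions below are invented), a logistic function can be fit to identification data and its slope parameter compared across listeners:

```python
import math

def fit_logistic_slope(steps, prop_p, lr=0.5, iters=5000):
    """Fit p = 1/(1+exp(-(a + b*x))) to identification proportions by
    gradient descent on squared error; b is the categorization slope."""
    a, b = 0.0, 0.0
    for _ in range(iters):
        ga = gb = 0.0
        for x, y in zip(steps, prop_p):
            p = 1.0 / (1.0 + math.exp(-(a + b * x)))
            g = (p - y) * p * (1 - p)  # d(error)/d(a), chain rule
            ga += g
            gb += g * x
        a -= lr * ga / len(steps)
        b -= lr * gb / len(steps)
    return a, b

# Hypothetical /b-p/ continuum: proportion of "p" responses at five steps.
steps = [-2, -1, 0, 1, 2]
steep = [0.02, 0.10, 0.50, 0.90, 0.98]    # sharp category boundary
shallow = [0.20, 0.35, 0.50, 0.65, 0.80]  # graded responding
_, b_steep = fit_logistic_slope(steps, steep)
_, b_shallow = fit_logistic_slope(steps, shallow)
print(b_steep > b_shallow)  # steeper identification function → larger slope
```

A shallower fitted b is the quantity interpreted in the abstract as weaker categorization.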


Subjects
Cochlear Implantation, Cochlear Implants, Hearing Aids, Speech Perception, Humans, Speech, Demography, Acoustic Stimulation
3.
Ear Hear ; 44(2): 338-357, 2023.
Article in English | MEDLINE | ID: mdl-36253909

ABSTRACT

OBJECTIVE: The objective of this study was to characterize the dynamics of real-time lexical access, including lexical competition among phonologically similar words, and spreading semantic activation in school-age children with hearing aids (HAs) and children with cochlear implants (CIs). We hypothesized that developing spoken language via degraded auditory input would lead children with HAs or CIs to adapt their approach to spoken word recognition, especially by slowing down lexical access. DESIGN: Participants were children ages 9 to 12 years old with normal hearing (NH), HAs, or CIs. Participants completed a Visual World Paradigm task in which they heard a spoken word and selected the matching picture from four options. Competitor items were either phonologically similar, semantically similar, or unrelated to the target word. As the target word unfolded, children's fixations to the target word, cohort competitor, rhyme competitor, semantically related item, and unrelated item were recorded as indices of ongoing lexical access and spreading semantic activation. RESULTS: Children with HAs and children with CIs showed slower fixations to the target, reduced fixations to the cohort competitor, and increased fixations to the rhyme competitor, relative to children with NH. This wait-and-see profile was more pronounced in the children with CIs than the children with HAs. Children with HAs and children with CIs also showed delayed fixations to the semantically related item, although this delay was attributable to their delay in activating words in general, not to a distinct semantic source. CONCLUSIONS: Children with HAs and children with CIs showed qualitatively similar patterns of real-time spoken word recognition. Findings suggest that developing spoken language via degraded auditory input causes long-term cognitive adaptations to how listeners recognize spoken words, regardless of the type of hearing device used.
Delayed lexical access directly led to delays in spreading semantic activation in children with HAs and CIs. This delay in semantic processing may impact these children's ability to understand connected speech in everyday life.


Subjects
Cochlear Implants, Hearing Aids, Speech Perception, Humans, Child, Semantics, Eye-Tracking Technology, Speech Perception/physiology
4.
Ear Hear ; 44(5): 1107-1120, 2023.
Article in English | MEDLINE | ID: mdl-37144890

ABSTRACT

OBJECTIVES: Understanding speech-in-noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al. 2021, Neuroimage) highlighted central neural factors underlying the variance in SiN ability in normal hearing (NH) subjects. The present study examined neural predictors of SiN ability in a large cohort of cochlear-implant (CI) users. DESIGN: We recorded electroencephalography in 114 postlingually deafened CI users while they completed the California consonant test: a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (consonant-nucleus-consonant [CNC] words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a vertex electrode (Cz), which could help maximize eventual generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was included in multiple linear regression analyses, along with several other demographic and hearing factors, as predictors of SiN performance. RESULTS: In general, there was good agreement between the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was predicted by the duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the California consonant test (which was conducted simultaneously with electroencephalography recording) and the consonant-nucleus-consonant task (conducted offline). These correlations held even after accounting for known predictors of performance, including residual low-frequency hearing thresholds.
In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in normal-hearing subjects, in whom speech perception ability was accounted for by the ability to suppress noise. CONCLUSIONS: These data indicate a neurophysiological correlate of SiN performance, thereby revealing a richer profile of an individual's hearing performance than shown by psychoacoustic measures alone. These results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underwritten by different mechanisms. Finally, the contrast with prior reports of NH listeners in the same task suggests that CI users' performance may be explained by a different weighting of neural processes than in NH listeners.
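The multiple linear regression step (ERP amplitude plus demographic and hearing factors as predictors of a speech score) can be sketched with ordinary least squares. Everything below is illustrative: the subjects, predictor values, and coefficients are invented, and a real analysis would have many more predictors and residual noise:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations, solved with
    Gaussian elimination (fine for a handful of predictors)."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)]
         for r in range(k)]                         # X'X
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]  # X'y
    for col in range(k):                            # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                                # back substitution
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c]
                              for c in range(r + 1, k))) / A[r][r]
    return beta

# Hypothetical subjects: [intercept, N1-P2 amplitude (uV), age (years)]
X = [[1, 2.0, 55], [1, 4.5, 60], [1, 3.1, 70], [1, 5.0, 48], [1, 1.2, 65]]
# Scores generated noise-free from beta = (50, 4, -0.2), so OLS recovers them
y = [50 + 4 * erp - 0.2 * age for _, erp, age in X]
beta = ols(X, y)
print([round(v, 2) for v in beta])  # → [50.0, 4.0, -0.2]
```

A positive ERP coefficient here corresponds to the abstract's finding that a larger cortical response to the target predicts better word recognition.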


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Humans, Speech, Individuality, Noise, Speech Perception/physiology
5.
Neuroimage ; 260: 119457, 2022 10 15.
Article in English | MEDLINE | ID: mdl-35842096

ABSTRACT

The efficiency of spoken word recognition is essential for real-time communication. There is consensus that this efficiency relies on an implicit process of activating multiple word candidates that compete for recognition as the acoustic signal unfolds in real-time. However, few methods capture the neural basis of this dynamic competition on a msec-by-msec basis. This is crucial for understanding the neuroscience of language, and for understanding hearing, language and cognitive disorders in people for whom current behavioral methods are not suitable. We applied machine-learning techniques to standard EEG signals to decode which word was heard on each trial and analyzed the patterns of confusion over time. Results mirrored psycholinguistic findings: Early on, the decoder was equally likely to report the target (e.g., baggage) or a similar sounding competitor (badger), but by around 500 msec, competitors were suppressed. Follow-up analyses showed that this is robust across EEG systems (gel and saline), with fewer channels, and with fewer trials. Results are robust within individuals and show high reliability. This suggests a powerful and simple paradigm that can assess the neural dynamics of speech decoding, with potential applications for understanding lexical development in a variety of clinical disorders.
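A minimal sketch of the decoding idea, with simulated data standing in for real EEG (the single "channel", time course, and nearest-centroid classifier are invented stand-ins for the actual machine-learning pipeline): two words share early "acoustics" and diverge only late, so a window-by-window decoder confuses them early but separates them late, mirroring the target/competitor confusions described above.

```python
import math, random
random.seed(1)

T = 100  # timepoints per trial

def trial(word):
    """Simulated single-channel response: the two words share early
    structure and diverge only after timepoint 60 (purely illustrative)."""
    late = 0.8 if word == "baggage" else -0.8
    return [math.sin(t / 10) + (late if t > 60 else 0.0) + random.gauss(0, 0.3)
            for t in range(T)]

# "Train" a nearest-centroid decoder from 20 simulated trials per word.
train = {w: [trial(w) for _ in range(20)] for w in ("baggage", "badger")}
centroids = {w: [sum(tr[t] for tr in trs) / len(trs) for t in range(T)]
             for w, trs in train.items()}

def decode(sig, t0, t1):
    """Report whichever word's centroid is closest within a time window."""
    return min(centroids,
               key=lambda w: sum((sig[t] - centroids[w][t]) ** 2
                                 for t in range(t0, t1)))

test_trials = [trial("baggage") for _ in range(50)]
early = sum(decode(s, 0, 50) == "baggage" for s in test_trials) / 50
late = sum(decode(s, 70, 100) == "baggage" for s in test_trials) / 50
print(early, late)  # early-window accuracy hovers near chance; late-window is near ceiling
```

The real study decoded from multichannel EEG with proper classifiers, but the logic (decode within sliding windows, inspect the confusion pattern over time) is the same.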


Subjects
Speech Perception, Electroencephalography, Humans, Psycholinguistics, Recognition (Psychology), Reproducibility of Results
6.
Ear Hear ; 43(5): 1487-1501, 2022.
Article in English | MEDLINE | ID: mdl-35067570

ABSTRACT

OBJECTIVES: A key challenge in word recognition is the temporary ambiguity created by the fact that speech unfolds over time. In normal hearing (NH) listeners, this temporary ambiguity is resolved through incremental processing and competition among lexical candidates. Post-lingually deafened cochlear implant (CI) users show similar incremental processing and competition but with slight delays. However, even brief delays could lead to drastic changes when compounded across multiple words in a phrase. This study asks whether words presented in non-informative continuous speech (a carrier phrase) are processed differently than in isolation and whether NH listeners and CI users exhibit different effects of a carrier phrase. DESIGN: In a Visual World Paradigm experiment, listeners heard words either in isolation or in non-informative carrier phrases (e.g., "click on the…" ). Listeners selected the picture corresponding to the target word from among four items including the target word (e.g., mustard ), a cohort competitor (e.g., mustache ), a rhyme competitor (e.g., custard ), and an unrelated item (e.g., penguin ). Eye movements were tracked as an index of the relative activation of each lexical candidate as competition unfolds over the course of word recognition. Participants included 21 post-lingually deafened cochlear implant users and 21 NH controls. A replication experiment presented in the Supplemental Digital Content, http://links.lww.com/EANDH/A999 included an additional 22 post-lingually deafened CI users and 18 NH controls. RESULTS: Both CI users and the NH controls were accurate at recognizing the words both in continuous speech and in isolation. The time course of lexical activation (indexed by the fixations) differed substantially between groups. CI users were delayed in fixating the target relative to NH controls. 
Additionally, CI users showed less competition from cohorts than NH controls (even though previous studies have often reported increased competition). However, CI users took longer to suppress the cohort and suppressed it less fully than the NH controls. For both CI users and NH controls, embedding words in carrier phrases led to more immediacy in lexical access, as observed by increases in cohort competition relative to when words were presented in isolation. However, CI users were not differentially affected by the carriers. CONCLUSIONS: Unlike prior work, CI users appeared to exhibit a "wait-and-see" profile, in which lexical access is delayed, minimizing early competition. However, CI users simultaneously sustained competitor activation late in the trial, possibly to preserve flexibility. This hybrid profile has not been observed previously. When target words are heard in continuous speech, both CI users and NH controls more heavily weight early information. However, CI users (but not NH listeners) also commit less fully to the target, potentially keeping options open if they need to recover from a misperception. This mix of patterns reflects a lexical system that is extremely flexible and adapts to fit the needs of a listener.


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Hearing, Humans, Speech, Speech Perception/physiology
7.
J Acoust Soc Am ; 152(6): 3819, 2022 12.
Article in English | MEDLINE | ID: mdl-36586868

ABSTRACT

Categorical perception (CP) is likely the single finding from speech perception with the biggest impact on cognitive science. However, within speech perception, it is widely known to be an artifact of task demands. CP is empirically defined as a relationship between phoneme identification and discrimination. As discrimination tasks do not appear to require categorization, this was thought to support the claim that listeners perceive speech solely in terms of linguistic categories. However, 50 years of work using discrimination tasks, priming, the visual world paradigm, and event related potentials has rejected the strongest forms of CP and provided little strong evidence for any form of it. This paper reviews the origins and impact of this scientific meme and the work challenging it. It discusses work showing that the encoding of auditory input is largely continuous, not categorical, and describes the modern theoretical synthesis in which listeners preserve fine-grained detail to enable more flexible processing. This synthesis is fundamentally inconsistent with CP. This leads to a different understanding of how to use and interpret the most basic paradigms in speech perception (phoneme identification along a continuum) and has implications for understanding language and hearing disorders, development, and multilingualism.


Subjects
Multilingualism, Speech Perception, Language, Speech, Evoked Potentials
8.
J Acoust Soc Am ; 152(6): 3728, 2022 12.
Article in English | MEDLINE | ID: mdl-36586841

ABSTRACT

Research on speech categorization and phoneme recognition has relied heavily on tasks in which participants listen to stimuli from a speech continuum and are asked to either classify each stimulus (identification) or discriminate between them (discrimination). Such tasks rest on assumptions about how perception maps onto discrete responses that have not been thoroughly investigated. Here, we identify critical challenges in the link between these tasks and theories of speech categorization. In particular, we show that patterns that have traditionally been linked to categorical perception could arise despite continuous underlying perception and that patterns that run counter to categorical perception could arise despite underlying categorical perception. We describe an alternative measure of speech perception using a visual analog scale that better differentiates between processes at play in speech categorization, and we review some recent findings that show how this task can be used to better inform our theories.


Subjects
Speech Perception, Speech, Humans, Auditory Perception, Speech Perception/physiology, Recognition (Psychology), Phonetics
9.
Neuroimage ; 228: 117699, 2021 03.
Article in English | MEDLINE | ID: mdl-33387631

ABSTRACT

Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals' ability to understand SiN varies in ways that cannot be explained by simple hearing profiles, which suggests that central factors may underlie this variance. Here, we elucidated a few cortical functions involved in a SiN task and their contributions to individual variance using both within- and across-subject approaches. Through our within-subject analysis of source-localized electroencephalography, we investigated how acoustic signal-to-noise ratio (SNR) alters cortical evoked responses to a target word across the speech recognition areas, finding stronger responses in left supramarginal gyrus (SMG, BA40; the dorsal lexicon area) with quieter noise. Through an individual differences approach, we found that listeners show different neural sensitivity to the background noise and target speech, reflected in the amplitude ratio of earlier auditory-cortical responses to speech and noise, termed the internal SNR. Listeners with a better internal SNR showed better SiN performance. Further, we found that post-speech SMG activity explains additional variance in SiN performance that is not accounted for by internal SNR. This result demonstrates that at least two cortical processes contribute to SiN performance independently: pre-target processing to attenuate the neural representation of background noise and post-target processing to extract information from speech sounds.
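The "internal SNR" idea, a ratio of evoked-response amplitudes to target speech versus background noise, can be illustrated with a toy amplitude computation. The waveforms, sampling, and latency windows below are invented; the real analysis worked on source-localized multichannel EEG:

```python
def n1p2(erp, n1_win, p2_win):
    """Peak-to-peak amplitude of the N1-P2 complex: P2 maximum minus
    N1 minimum, each searched in its own latency window (sample indices)."""
    return max(erp[p2_win[0]:p2_win[1]]) - min(erp[n1_win[0]:n1_win[1]])

def internal_snr(speech_erp, noise_erp, n1_win, p2_win):
    """Amplitude ratio of the response to the target word vs. the response
    to noise onset; larger = better neural target/noise separation."""
    return n1p2(speech_erp, n1_win, p2_win) / n1p2(noise_erp, n1_win, p2_win)

# Toy waveforms (microvolts): a clear N1 trough then a P2 peak for speech,
# and a weaker complex for the noise onset, for one hypothetical listener.
speech = [0, -1, -4, -6, -3, 1, 4, 6, 3, 1]
noise = [0, -0.5, -1.5, -2, -1, 0.5, 1.5, 2, 1, 0.5]
snr = internal_snr(speech, noise, n1_win=(0, 5), p2_win=(5, 10))
print(snr)  # → 3.0 (12 uV speech response / 4 uV noise response)
```

Per the abstract's logic, listeners with higher ratios of this kind showed better SiN performance.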


Subjects
Attention/physiology, Perceptual Masking/physiology, Speech Perception/physiology, Adult, Auditory Cortex, Auditory Threshold/physiology, Electroencephalography, Auditory Evoked Potentials/physiology, Female, Humans, Male, Noise, Computer-Assisted Signal Processing, Signal-To-Noise Ratio, Young Adult
10.
Mem Cognit ; 49(5): 984-997, 2021 07.
Article in English | MEDLINE | ID: mdl-33733433

ABSTRACT

It is increasingly understood that people may learn new word/object mappings in part via a form of statistical learning in which they track co-occurrences between words and objects across situations (cross-situational learning). Multiple learning processes contribute to this, thought to reflect the simultaneous influence of real-time hypothesis testing and gradual learning. It is unclear how these processes interact, and whether either requires explicit cognitive resources. To manipulate the availability of working memory resources for explicit processing, participants completed a dual-task paradigm in which a cross-situational word-learning task was interleaved with a short-term memory task. We then used trial-by-trial analyses to estimate how different learning processes that play out simultaneously are impacted by resource availability. Critically, we found that both hypothesis-testing and gradual-learning effects showed a small reduction under limited resources, and that the effect of memory load was not fully mediated by these processes. This suggests that neither process is purely explicit, and there may be additional resource-dependent processes at play. Consistent with a hybrid account, these findings suggest that these two aspects of learning may reflect different aspects of a single system gated by attention, rather than competing learning systems.
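The gradual, associative side of cross-situational learning can be sketched as simple co-occurrence tallying (the hypothesis-testing process discussed above is not modeled here, and all names, set sizes, and trial counts are invented for illustration):

```python
import random
from collections import defaultdict
random.seed(0)

# Tally word-object co-occurrences across ambiguous trials, then pick
# each word's most frequently co-occurring object as its referent.
vocab = {f"word{i}": f"obj{i}" for i in range(8)}   # true mappings
counts = defaultdict(lambda: defaultdict(int))
for _ in range(200):
    shown = random.sample(list(vocab), 4)   # objects for four words appear
    spoken = random.choice(shown)           # one word is heard; its true
    for w in shown:                         # referent is among the objects
        counts[spoken][vocab[w]] += 1       # credit every visible object
learned = {w: max(counts[w], key=counts[w].get) for w in vocab}
accuracy = sum(learned[w] == vocab[w] for w in vocab) / len(vocab)
print(accuracy)  # co-occurrence tallies alone recover most mappings
```

The true referent co-occurs with its word on every trial while foils co-occur only by chance, so the tallies separate them; the study's point is that human learning adds faster, hypothesis-like dynamics on top of this statistical baseline.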


Subjects
Verbal Learning, Cues (Psychology), Humans, Short-Term Memory, Probability
11.
J Acoust Soc Am ; 150(3): 2131, 2021 09.
Article in English | MEDLINE | ID: mdl-34598595

ABSTRACT

Speech perception (especially in background noise) is a critical problem for hearing-impaired listeners and an important issue for cognitive hearing science. Despite a plethora of standardized measures, few single-word closed-set tests uniformly sample the most frequently used phonemes and use response choices that equally sample phonetic features like place and voicing. The Iowa Test of Consonant Perception (ITCP) attempts to solve this. It is a proportionally balanced phonemic word recognition task designed to assess perception of the initial consonant of monosyllabic consonant-vowel-consonant (CVC) words. The ITCP consists of 120 sampled CVC words. Words were recorded from four different talkers (two female) and uniformly sampled from all four quadrants of the vowel space to control for coarticulation. Response choices on each trial are balanced to equate difficulty and sample a single phonetic feature. This study evaluated the psychometric properties of the ITCP by examining reliability (test-retest) and validity in a sample of online normal-hearing participants. Ninety-eight participants completed two sessions of the ITCP along with standardized tests of words and sentences in noise (CNC words and AzBio sentences). The ITCP showed good test-retest reliability and convergent validity with two popular tests presented in noise. All the materials to use the ITCP or to construct your own version of the ITCP are freely available [Geller, McMurray, Holmes, and Choi (2020). https://osf.io/hycdu/].
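Test-retest reliability of the kind reported here is commonly indexed by correlating session-1 and session-2 scores. A minimal sketch with invented percent-correct scores (the study itself may have used additional or different reliability statistics):

```python
import math

def pearson_r(x, y):
    """Pearson correlation, a common basis for a test-retest estimate."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical session-1 and session-2 scores for eight listeners.
session1 = [88, 74, 91, 66, 80, 72, 95, 85]
session2 = [86, 70, 93, 69, 78, 75, 94, 82]
r = pearson_r(session1, session2)
print(round(r, 2))  # a high r across sessions indicates good test-retest reliability
```

Convergent validity follows the same logic, correlating ITCP scores against the CNC and AzBio scores instead of a second session.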


Subjects
Speech Perception, Female, Humans, Iowa, Noise/adverse effects, Phonetics, Reproducibility of Results
12.
J Exp Child Psychol ; 189: 104705, 2020 01.
Article in English | MEDLINE | ID: mdl-31634736

ABSTRACT

Young children are surprisingly good word learners. Despite their relative lack of world knowledge and limited vocabularies, they consistently map novel words to novel referents and, at later ages, show retention of these new word-referent pairs. Prior work has implicated the use of mutual exclusivity constraints and novelty biases, which require that children use knowledge of well-known words to disambiguate uncertain naming situations. The current study, however, presents evidence that weaker vocabulary knowledge during the initial exposure to a new word may be better for retention of new mappings. Children aged 18-24 months selected referents for novel words in the context of foil stimuli that varied in their lexical strength and novelty: well-known items (e.g., shoe), just-learned weakly known items (e.g., wif), and completely novel items. Referent selection performance was significantly reduced on trials with weakly known foil items. Surprisingly, however, children subsequently showed above-chance retention for novel words mapped in the context of weakly known competitors compared with those mapped with strongly known competitors or with completely novel competitors. We discuss implications for our understanding of word learning constraints and how children use known words and novelty during word learning.


Subjects
Mental Recall, Verbal Learning, Vocabulary, Female, Humans, Infant, Knowledge, Male
13.
J Exp Child Psychol ; 191: 104731, 2020 03.
Article in English | MEDLINE | ID: mdl-31786367

ABSTRACT

An important component of learning to read is the acquisition of letter-to-sound mappings. The sheer quantity of mappings and the many exceptions in opaque languages such as English suggest that children may use a form of statistical learning to acquire them. However, whereas statistical models of reading are item-based, reading instruction typically focuses on rule-based approaches involving small sets of regularities. This discrepancy poses the question of how different groupings of regularities, an unexamined factor of most reading curricula, may affect learning. Exploring the interplay between item statistics and rules, this study investigated how consonant variability, an item-level factor, and the degree of overlap among the to-be-trained vowel strings, a group-level factor, influence learning. English-speaking first graders (N = 361) were randomly assigned to be trained on vowel sets with high overlap (e.g., EA, AI) or low overlap (e.g., EE, AI); this was crossed with a manipulation of consonant frame variability. Whereas high vowel overlap led to poorer initial performance, it resulted in more learning when tested immediately and after a 2-week delay. There was little beneficial effect of consonant variability. These findings indicate that online letter/sound processing affects how new knowledge is integrated into existing information. Moreover, they suggest that vowel overlap should be considered when designing reading curricula.


Subjects
Psychological Practice, Reading, Psychological Retention/physiology, Child, Female, Humans, Male, Probability Learning, Random Allocation
15.
Dev Psychobiol ; 62(6): 697-710, 2020 09.
Article in English | MEDLINE | ID: mdl-32037557

ABSTRACT

During the perinatal period in mammals when active sleep predominates, skeletal muscles twitch throughout the body. We have hypothesized that myoclonic twitches provide unique insight into the functional status of the human infant's nervous system. However, assessments of the rate and patterning of twitching have largely been restricted to infant rodents. Thus, here we analyze twitching in human infants over the first seven postnatal months. Using videography and behavioral measures of twitching during bouts of daytime sleep, we find at all ages that twitching across the body occurs predominantly in bursts at intervals of 10 s or less. We also find that twitching is expressed differentially across the body and with age. For example, twitching of the face and head is most prevalent shortly after birth and decreases over the first several months. In addition, twitching of the hands and feet occurs at a consistently higher rate than does twitching elsewhere in the body. Finally, the patterning of twitching becomes more structured with age, with twitches of the left and right hands and feet exhibiting the strongest coupling. Altogether, these findings support the notion that twitches can provide a unique source of information about typical and atypical sensorimotor development.


Subjects
Child Development/physiology, Skeletal Muscle/physiology, Sleep/physiology, Spasm/physiopathology, Animals, Female, Humans, Infant, Newborn, Male, REM Sleep/physiology, Spatio-Temporal Analysis, Video Recording
16.
Ear Hear ; 40(4): 961-980, 2019.
Article in English | MEDLINE | ID: mdl-30531260

ABSTRACT

OBJECTIVES: Work in normal-hearing (NH) adults suggests that spoken language processing involves coping with ambiguity. Even a clearly spoken word contains brief periods of ambiguity as it unfolds over time, and early portions will not be sufficient to uniquely identify the word. However, beyond this temporary ambiguity, NH listeners must also cope with the loss of information due to reduced forms, dialect, and other factors. A recent study suggests that NH listeners may adapt to increased ambiguity by changing the dynamics of how they commit to candidates at a lexical level. Cochlear implant (CI) users must also frequently deal with highly degraded input, in which there is less information available in the input to recover a target word. The authors asked here whether their frequent experience with this leads to lexical dynamics that are better suited for coping with uncertainty. DESIGN: Listeners heard words either correctly pronounced (dog) or mispronounced at onset (gog) or offset (dob). Listeners selected the corresponding picture from a screen containing pictures of the target and three unrelated items. While they did this, fixations to each object were tracked as a measure of the time course of identifying the target. The authors tested 44 postlingually deafened adult CI users in 2 groups (23 used standard electric only configurations, and 21 supplemented the CI with a hearing aid), along with 28 age-matched age-typical hearing (ATH) controls. RESULTS: All three groups recognized the target word accurately, though each showed a small decrement for mispronounced forms (larger in both types of CI users). Analysis of fixations showed a close time locking to the timing of the mispronunciation. Onset mispronunciations delayed initial fixations to the target, but fixations to the target showed partial recovery by the end of the trial. Offset mispronunciations showed no effect early, but suppressed looking later. 
This pattern was attested in all three groups, though both types of CI users were slower and did not commit fully to the target. When the authors quantified the degree of disruption (by the mispronounced forms), they found that both groups of CI users showed less disruption than ATH listeners during the first 900 msec of processing. Finally, an individual differences analysis showed that within the CI users, the dynamics of fixations predicted speech perception outcomes over and above accuracy in this task, and that CI users with the more rapid fixation patterns of ATH listeners showed better outcomes. CONCLUSIONS: Postlingually deafened CI users process speech incrementally (as do ATH listeners), though they commit more slowly and less strongly to a single item than do ATH listeners. This may allow them to cope more flexibly with mispronunciations.


Subjects
Cochlear Implantation, Hearing Aids, Sensorineural Hearing Loss/rehabilitation, Speech Perception/physiology, Uncertainty, Acoustic Stimulation, Adult, Case-Control Studies, Eye Movement Measurements, Female, Sensorineural Hearing Loss/physiopathology, Humans, Male, Middle Aged, Photic Stimulation
17.
J Exp Child Psychol ; 175: 17-36, 2018 11.
Article in English | MEDLINE | ID: mdl-29979958

ABSTRACT

Considerable debate in language acquisition concerns whether word learning is driven by domain-general (symbolically flexible) or domain-specific learning mechanisms. Prior work has shown that very young children can map objects to either words or nonlinguistic sounds, but by 20 months of age this ability narrows to only words. This suggests that although symbolically flexible mechanisms are operative early, they become more specified over development. However, such research has been conducted only with young children in ostensive teaching contexts. Thus, we investigated symbolic flexibility at later ages in more referentially ambiguous learning situations. In Experiment 1, 47 6- to 8-year-olds acquired eight symbol-object mappings in a cross-situational word learning paradigm where multiple mappings are learned based only on co-occurrence. In the word condition participants learned with novel pseudowords, whereas in the sound condition participants learned with nonlinguistic sounds (e.g., beeps). Children acquired the mappings, but performance did not differ across conditions, suggesting broad symbolic flexibility. In Experiment 2, 41 adults learned 16 mappings in a comparable design. They learned with ease in both conditions but showed a significant advantage for words. Thus, symbolic flexibility decreases with age, potentially due to repeated experiences with linguistic materials. Moreover, trial-by-trial analyses of the microstructure of both children's and adults' performance did not reveal any substantial differences due to condition, consistent with the hypothesis that learning mechanisms are generally employed similarly with both words and nonlinguistic sounds.


Subjects
Language Development , Verbal Learning , Acoustic Stimulation , Adult , Child , Female , Humans , Male , Sound , Young Adult
18.
Psychol Sci ; 27(1): 43-52, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26581947

ABSTRACT

Acoustic cues are short-lived and highly variable, which makes speech perception a difficult problem. However, most listeners solve this problem effortlessly. In the present experiment, we demonstrated that part of the solution lies in predicting upcoming speech sounds and that predictions are modulated by high-level expectations about the current sound. Participants heard isolated fricatives (e.g., "s," "sh") and predicted the upcoming vowel. Accuracy was above chance, which suggests that fine-grained detail in the signal can be used for prediction. A second group performed the same task but also saw a still face and a letter corresponding to the fricative. This group performed markedly better, which suggests that high-level knowledge modulates prediction by helping listeners form expectations about what the fricative should have sounded like. This suggests a form of data explanation operating in speech perception: Listeners account for variance due to their knowledge of the talker and current phoneme, and they use what is left over to make more accurate predictions about the next sound.
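The "data explanation" account sketched in this abstract — accounting for variance attributable to known factors (talker, current phoneme) and using the residual to predict the upcoming sound — can be illustrated with a toy simulation. All distributions and parameter values below are invented for illustration; this is not the authors' model:

```python
import random

def simulate_data_explanation(n_trials=2000, seed=1):
    """Toy 'data explanation' listener: a frication cue mixes talker,
    fricative, and vowel (coarticulation) contributions. A listener who
    knows the talker and fricative can subtract those contributions and
    read the vowel off the residual; a naive listener must classify the
    raw cue, which the other factors dominate."""
    rng = random.Random(seed)
    informed = naive = 0
    for _ in range(n_trials):
        talker = rng.choice([-2.0, 2.0])   # talker-specific spectral offset
        fric = rng.choice([-1.0, 1.0])     # /s/ vs /sh/ spectral mean
        vowel = rng.choice([-0.5, 0.5])    # coarticulatory shift from upcoming vowel
        cue = talker + fric + vowel + rng.gauss(0, 0.3)

        residual = cue - talker - fric           # knowledge-informed listener
        informed += (residual > 0) == (vowel > 0)
        naive += (cue > 0) == (vowel > 0)        # cue-only listener
    return informed / n_trials, naive / n_trials
```

The informed listener's residual isolates the small coarticulatory shift, so vowel prediction is far more accurate — mirroring the advantage the second group showed when given the talker's face and the fricative's identity.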


Subjects
Phonetics , Speech Acoustics , Speech Perception , Cues (Psychology) , Female , Humans , Language , Male
19.
Ear Hear ; 37(1): e37-51, 2016.
Article in English | MEDLINE | ID: mdl-26317298

ABSTRACT

OBJECTIVES: While outcomes with cochlear implants (CIs) are generally good, performance can be fragile. The authors examined two factors that are crucial for good CI performance. First, while there is a clear benefit for adding residual acoustic hearing to CI stimulation (typically in low frequencies), it is unclear whether this contributes directly to phonetic categorization. Thus, the authors examined perception of voicing (which uses low-frequency acoustic cues) and fricative place of articulation (s/∫, which does not) in CI users with and without residual acoustic hearing. Second, in speech categorization experiments, CI users typically show shallower identification functions. These are typically interpreted as deriving from noisy encoding of the signal. However, psycholinguistic work suggests shallow slopes may also be a useful way to adapt to uncertainty. The authors thus employed an eye-tracking paradigm to examine this in CI users. DESIGN: Participants were 30 CI users (with a variety of configurations) and 22 age-matched normal hearing (NH) controls. Participants heard tokens from six b/p and six s/∫ continua (eight steps) spanning real words (e.g., beach/peach, sip/ship). Participants selected the picture corresponding to the word they heard from a screen containing four items (a b-, p-, s- and ∫-initial item). Eye movements to each object were monitored as a measure of how strongly they were considering each interpretation in the moments leading up to their final percept. RESULTS: Mouse-click results (analogous to phoneme identification) for voicing showed a shallower slope for CI users than NH listeners, but no differences between CI users with and without residual acoustic hearing. For fricatives, CI users also showed a shallower slope, but unexpectedly, acoustic + electric listeners showed an even shallower slope. Eye movements showed a gradient response to fine-grained acoustic differences for all listeners. 
Even considering only trials in which a participant clicked "b" (for example), and accounting for variation in the category boundary, participants made more looks to the competitor ("p") as the voice onset time neared the boundary. CI users showed a similar pattern, but looked to the competitor more than NH listeners, and this was not different at different continuum steps. CONCLUSION: Residual acoustic hearing did not improve voicing categorization suggesting it may not help identify these phonetic cues. The fact that acoustic + electric users showed poorer performance on fricatives was unexpected as they usually show a benefit in standardized perception measures, and as sibilants contain little energy in the low-frequency (acoustic) range. The authors hypothesize that these listeners may overweight acoustic input, and have problems when this is not available (in fricatives). Thus, the benefit (or cost) of acoustic hearing for phonetic categorization may be complex. Eye movements suggest that in both CI and NH listeners, phoneme categorization is not a process of mapping continuous cues to discrete categories. Rather listeners preserve gradiency as a way to deal with uncertainty. CI listeners appear to adapt to their implant (in part) by amplifying competitor activation to preserve their flexibility in the face of potential misperceptions.
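The identification slopes discussed above are conventionally modeled with a logistic psychometric function, where a shallower slope spreads responses across the continuum and signals greater category uncertainty. A minimal sketch (the parameter values are illustrative, not fitted to these data):

```python
import math

def psychometric(step, boundary, slope):
    """Logistic identification function: probability of (say) a /p/
    response at a given continuum step, parameterized by the category
    boundary (step at which p = 0.5) and the slope at that boundary."""
    return 1.0 / (1.0 + math.exp(-slope * (step - boundary)))

steps = range(1, 9)  # an eight-step /b/-/p/ continuum, as in the experiment
nh = [psychometric(s, boundary=4.5, slope=2.0) for s in steps]  # steep (NH-like)
ci = [psychometric(s, boundary=4.5, slope=0.7) for s in steps]  # shallow (CI-like)
```

With the shallow slope, responses near the continuum endpoints remain probabilistic rather than categorical — the pattern that, on the gradiency account above, reflects an adaptive hedge against misperception rather than merely noisy encoding.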


Subjects
Adaptation, Physiological , Cochlear Implants , Deafness/physiopathology , Speech Perception , Uncertainty , Adult , Aged , Case-Control Studies , Cochlear Implantation , Deafness/rehabilitation , Eye Movement Measurements , Female , Humans , Male , Middle Aged , Phonetics , Young Adult
20.
Neuroimage ; 101: 598-609, 2014 Nov 01.
Article in English | MEDLINE | ID: mdl-25019680

ABSTRACT

The model of functional organization of human auditory cortex is based in part on findings in non-human primates, where the auditory cortex is hierarchically delineated into core, belt and parabelt fields. This model envisions that core cortex projects directly to belt, but not to parabelt, whereas belt regions are a major source of direct input to auditory parabelt. In humans, the posteromedial portion of Heschl's gyrus (HG) represents core auditory cortex, whereas the anterolateral portion of HG and the posterolateral superior temporal gyrus (PLST) are generally interpreted as belt and parabelt, respectively. Under this scheme, response latencies can be hypothesized to progress serially from posteromedial HG to anterolateral HG to PLST. We examined this hypothesis by comparing response latencies to multiple stimuli, measured across these regions using simultaneous intracranial recordings in neurosurgical patients. Stimuli were 100 Hz click trains and the speech syllable /da/. Response latencies were determined by examining event-related band power in the high gamma frequency range. The earliest responses in auditory cortex occurred in posteromedial HG. Responses elicited from sites in anterolateral HG were neither earlier in latency than those from sites on PLST, nor more robust. Anterolateral HG and PLST exhibited some preference for the speech syllable over the click trains. These findings do not support a strict serial model in which information flows principally along HG to PLST. Instead, the data suggest that a portion of PLST may represent a relatively early stage in the auditory cortical hierarchy.
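The latency measure described above — event-related band power in the high-gamma range, with latency taken as the first post-onset rise — can be sketched with a naive DFT-based band-power estimate and a threshold-crossing rule. This is a toy illustration on a synthetic signal, not the authors' analysis pipeline; the window length, band edges, and threshold are invented:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power in [f_lo, f_hi] Hz via a direct DFT (fine for short windows)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += (re * re + im * im) / (n * n)
    return power

def onset_latency(signal, fs, win=0.05, threshold=3.0):
    """Return the start time of the first window whose high-gamma
    (70-150 Hz) power exceeds `threshold` times the first (baseline)
    window's power; None if no window crosses it."""
    step = int(win * fs)
    baseline = band_power(signal[:step], fs, 70, 150) or 1e-12
    for start in range(0, len(signal) - step, step):
        if band_power(signal[start:start + step], fs, 70, 150) > threshold * baseline:
            return start / fs
    return None

# Demo: a 100 Hz burst beginning at t = 0.2 s over a weak 10 Hz background.
fs = 1000
sig = [0.1 * math.sin(2 * math.pi * 10 * i / fs)
       + (math.sin(2 * math.pi * 100 * i / fs) if i >= 200 else 0.0)
       for i in range(500)]
latency = onset_latency(sig, fs)
```

Comparing such latencies across simultaneously recorded sites is what licenses the ordering claims in the abstract (earliest in posteromedial HG; anterolateral HG not reliably earlier than PLST).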


Subjects
Auditory Cortex/physiology , Auditory Perception/physiology , Brain Mapping/methods , Electroencephalography/methods , Evoked Potentials, Auditory/physiology , Gamma Rhythm/physiology , Adult , Auditory Cortex/anatomy & histology , Electrodes, Implanted , Female , Humans , Male , Middle Aged , Reaction Time/physiology , Speech Perception/physiology , Young Adult