Results 1 - 20 of 39
1.
J Speech Lang Hear Res ; 67(2): 595-605, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38266225

ABSTRACT

PURPOSE: Numerous tasks have been developed to measure receptive vocabulary, many of which were designed to be administered in person by a trained researcher or clinician. The purpose of the current study is to compare a common, in-person test of vocabulary with other vocabulary assessments that can be self-administered. METHOD: Fifty-three participants completed the Peabody Picture Vocabulary Test (PPVT) via online video call to mimic in-person administration, as well as four additional fully automated, self-administered measures of receptive vocabulary. Participants also completed three control tasks that do not measure receptive vocabulary. RESULTS: Pearson correlations indicated moderate correlations among most of the receptive vocabulary measures (approximately r = .50-.70). As expected, the control tasks showed only weak correlations with the vocabulary measures. However, subsets of items from the four self-administered measures of receptive vocabulary achieved high correlations with the PPVT (r > .80). These subsets were found through a repeated resampling approach. CONCLUSIONS: Measures of receptive vocabulary differ in which items are included and in the assessment task (e.g., lexical decision, picture matching, synonym matching). The results of the current study suggest that several self-administered tasks can achieve high correlations with the PPVT when a subset of items, rather than the full set, is scored. These data provide evidence that subsets of items on one behavioral assessment can correlate more highly with another measure. In practical terms, they demonstrate that self-administered, automated measures of receptive vocabulary can be used as reasonable substitutes for at least one test (the PPVT) that requires human interaction. That several of the fully automated measures yielded high correlations with the PPVT suggests that different tasks could be selected depending on the needs of the researcher. It is important to note that the aim was not to establish the clinical relevance of these measures, but to establish whether researchers could use an experimental task of receptive vocabulary that probes a construct similar to the one captured by the PPVT, and use these tasks as measures of individual differences.
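The item-subset analysis above can be illustrated with a short, hypothetical sketch of the repeated-resampling idea: randomly sample subsets of items from a self-administered task, score each participant on the subset, and keep the subset whose summed score correlates most strongly with the PPVT. All data, subset sizes, and variable names below are invented for illustration; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumption: 53 participants, 100 items on a self-administered
# vocabulary task (0/1 accuracy), plus each participant's PPVT score.
n_participants, n_items = 53, 100
item_scores = rng.integers(0, 2, size=(n_participants, n_items))
ppvt = rng.normal(100, 15, size=n_participants)

def pearson_r(x, y):
    """Pearson correlation between two 1-D arrays."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float(np.mean(x * y))

best_r, best_subset = -1.0, None
subset_size, n_resamples = 40, 10_000   # illustrative settings

for _ in range(n_resamples):
    subset = rng.choice(n_items, size=subset_size, replace=False)
    total = item_scores[:, subset].sum(axis=1)   # summed accuracy on the subset
    r = pearson_r(total, ppvt)
    if r > best_r:
        best_r, best_subset = r, subset

print(f"best subset correlation with PPVT: r = {best_r:.2f}")
```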


Subject(s)
Vocabulary; Humans; Intelligence Tests
2.
Elife ; 12, 2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38265440

ABSTRACT

Learning to perform a perceptual decision task is generally achieved through sessions of effortful practice with feedback. Here, we investigated how passive exposure to task-relevant stimuli, which is relatively effortless and does not require feedback, influences active learning. First, we trained mice in a sound-categorization task with various schedules combining passive exposure and active training. Mice that received passive exposure exhibited faster learning, regardless of whether this exposure occurred entirely before active training or was interleaved between active sessions. We next trained neural-network models with different architectures and learning rules to perform the task. Networks that use the statistical properties of stimuli to enhance separability of the data via unsupervised learning during passive exposure provided the best account of the behavioral observations. We further found that, during interleaved schedules, there is an increased alignment between weight updates from passive exposure and active training, such that a few interleaved sessions can be as effective as schedules with long periods of passive exposure before active training, consistent with our behavioral observations. These results provide key insights for the design of efficient training schedules that combine active learning and passive exposure in both natural and artificial systems.
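As a rough illustration of the modeling idea (unsupervised learning from stimulus statistics during passive exposure, feedback-driven learning during active training), the sketch below interleaves the two kinds of updates in a toy two-layer network. The architecture, learning rules, dimensions, and schedule are assumptions for illustration, not the published models.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features, n_hidden = 20, 10

# Hypothetical two-layer network: unsupervised updates shape the input weights
# during passive exposure; supervised updates train the readout during active
# sessions.
W_in = rng.normal(scale=0.1, size=(n_hidden, n_features))
w_out = rng.normal(scale=0.1, size=n_hidden)

def passive_update(x, lr=1e-3):
    """Unsupervised update: driven only by stimulus statistics, no feedback."""
    global W_in
    h = np.tanh(W_in @ x)
    W_in += lr * np.outer(h, x)                            # Hebbian-style co-activity rule
    W_in /= np.linalg.norm(W_in, axis=1, keepdims=True)    # keep weights bounded

def active_update(x, label, lr=1e-2):
    """Supervised update on the readout, driven by categorization feedback."""
    global w_out
    h = np.tanh(W_in @ x)
    pred = 1.0 / (1.0 + np.exp(-(w_out @ h)))   # probability of category 1
    w_out += lr * (label - pred) * h            # error-corrective learning

# Interleaved schedule: passive-exposure blocks alternate with active sessions.
for session in range(6):
    stimuli = rng.normal(size=(200, n_features))
    labels = (stimuli[:, 0] > 0).astype(float)  # toy category boundary
    if session % 2 == 0:
        for x in stimuli:
            passive_update(x)
    else:
        for x, y in zip(stimuli, labels):
            active_update(x, y)
```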


Subject(s)
Behavior Observation Techniques; Neural Networks, Computer; Animals; Mice; Sound
3.
Lang Speech ; 67(1): 40-71, 2024 Mar.
Article in English | MEDLINE | ID: mdl-36967604

ABSTRACT

Previous research has shown that native listeners benefit from clearly produced speech, as well as from predictable semantic context, when these enhancements are delivered in native speech. However, it is unclear whether native listeners benefit from acoustic and semantic enhancements differently when listening to other varieties of speech, including non-native speech. The current study examines to what extent native English listeners benefit from acoustic and semantic cues present in native and non-native English speech. Native English listeners transcribed sentence-final words of differing levels of semantic predictability, produced in plain and clear speaking styles by native English talkers and by native Mandarin talkers of higher and lower proficiency in English. The perception results demonstrated that listeners benefited from semantic cues in higher- and lower-proficiency talkers' speech (i.e., they transcribed speech more accurately), but not from acoustic cues, even though higher-proficiency talkers did make substantial acoustic enhancements from plain to clear speech. The current results suggest that native listeners benefit more robustly from semantic cues than from acoustic cues when those cues are embedded in non-native speech.


Subject(s)
Semantics; Speech Perception; Humans; Speech; Noise; Phonetics; Acoustics; Speech Intelligibility
4.
Hear Res ; 441: 108920, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38029503

ABSTRACT

A better understanding of the neural mechanisms of speech processing can have a major impact in the development of strategies for language learning and in addressing disorders that affect speech comprehension. Technical limitations in research with human subjects hinder a comprehensive exploration of these processes, making animal models essential for advancing the characterization of how neural circuits make speech perception possible. Here, we investigated the mouse as a model organism for studying speech processing and explored whether distinct regions of the mouse auditory cortex are sensitive to specific acoustic features of speech. We found that mice can learn to categorize frequency-shifted human speech sounds based on differences in formant transitions (FT) and voice onset time (VOT). Moreover, neurons across various auditory cortical regions were selective to these speech features, with a higher proportion of speech-selective neurons in the dorso-posterior region. Last, many of these neurons displayed mixed-selectivity for both features, an attribute that was most common in dorsal regions of the auditory cortex. Our results demonstrate that the mouse serves as a valuable model for studying the detailed mechanisms of speech feature encoding and neural plasticity during speech-sound learning.


Subject(s)
Auditory Cortex; Speech Perception; Humans; Mice; Animals; Auditory Cortex/physiology; Speech; Acoustic Stimulation/methods; Speech Perception/physiology; Acoustics; Auditory Perception/physiology
5.
bioRxiv ; 2023 Sep 21.
Article in English | MEDLINE | ID: mdl-37790479

ABSTRACT

A better understanding of the neural mechanisms of speech processing can have a major impact in the development of strategies for language learning and in addressing disorders that affect speech comprehension. Technical limitations in research with human subjects hinder a comprehensive exploration of these processes, making animal models essential for advancing the characterization of how neural circuits make speech perception possible. Here, we investigated the mouse as a model organism for studying speech processing and explored whether distinct regions of the mouse auditory cortex are sensitive to specific acoustic features of speech. We found that mice can learn to categorize frequency-shifted human speech sounds based on differences in formant transitions (FT) and voice onset time (VOT). Moreover, neurons across various auditory cortical regions were selective to these speech features, with a higher proportion of speech-selective neurons in the dorso-posterior region. Last, many of these neurons displayed mixed-selectivity for both features, an attribute that was most common in dorsal regions of the auditory cortex. Our results demonstrate that the mouse serves as a valuable model for studying the detailed mechanisms of speech feature encoding and neural plasticity during speech-sound learning.

6.
J Cogn ; 6(1): 26, 2023.
Article in English | MEDLINE | ID: mdl-37213437

ABSTRACT

Models of lemma access in language production predict occasional mis-selection of lemmas linked to highly similar concepts (synonyms) and concepts standing in a set-superset relation (subsumatives). It is unclear, however, if such errors occur in spontaneous speech, and if they do, whether humans can detect them given their minimal impact on sentence meaning. This data report examines a large corpus of English spontaneous speech errors and documents a low but non-negligible occurrence of these categories. The existence of synonym and subsumative errors is documented in a larger open access data set that supports a range of new investigations of the semantic structure of lexical substitution and word blend speech errors.

7.
bioRxiv ; 2023 Oct 05.
Article in English | MEDLINE | ID: mdl-37066276

ABSTRACT

Learning to perform a perceptual decision task is generally achieved through sessions of effortful practice with feedback. Here, we investigated how passive exposure to task-relevant stimuli, which is relatively effortless and does not require feedback, influences active learning. First, we trained mice in a sound-categorization task with various schedules combining passive exposure and active training. Mice that received passive exposure exhibited faster learning, regardless of whether this exposure occurred entirely before active training or was interleaved between active sessions. We next trained neural-network models with different architectures and learning rules to perform the task. Networks that use the statistical properties of stimuli to enhance separability of the data via unsupervised learning during passive exposure provided the best account of the behavioral observations. We further found that, during interleaved schedules, there is an increased alignment between weight updates from passive exposure and active training, such that a few interleaved sessions can be as effective as schedules with long periods of passive exposure before active training, consistent with our behavioral observations. These results provide key insights for the design of efficient training schedules that combine active learning and passive exposure in both natural and artificial systems.

8.
J Acoust Soc Am ; 153(1): 68, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36732227

ABSTRACT

Intelligibility measures, which assess the number of words or phonemes a listener correctly transcribes or repeats, are commonly used metrics for speech perception research. While these measures have many benefits for researchers, they also come with a number of limitations. By pointing out the strengths and limitations of this approach, including how it fails to capture aspects of perception such as listening effort, this article argues that the role of intelligibility measures must be reconsidered in fields such as linguistics, communication disorders, and psychology. Recommendations for future work in this area are presented.
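For readers unfamiliar with the metric, a minimal sketch of a word-level intelligibility score (the proportion of target words a listener reports) is given below. The normalization and one-to-one matching rule are simplifying assumptions; real studies often use stricter alignment or keyword-only scoring.

```python
import re

def intelligibility(target: str, transcription: str) -> float:
    """Proportion of target words correctly reported by the listener."""
    norm = lambda s: re.findall(r"[a-z']+", s.lower())
    target_words = norm(target)
    reported = norm(transcription)
    hits = 0
    for word in target_words:
        if word in reported:
            hits += 1
            reported.remove(word)   # each reported token can match only once
    return hits / len(target_words) if target_words else 0.0

print(intelligibility("the boat sailed past the dock",
                      "a boat sailed past the dark"))   # 4/6 ~ 0.67
```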


Subject(s)
Speech Perception; Speech Intelligibility; Linguistics; Cognition
9.
J Acoust Soc Am ; 152(5): 3025, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36456300

ABSTRACT

Most current theories and models of second language speech perception are grounded in the notion that learners acquire speech sound categories in their target language. In this paper, this classic idea in speech perception is revisited, given that clear evidence for formation of such categories is lacking in previous research. To understand the debate on the nature of speech sound representations in a second language, an operational definition of "category" is presented, and the issues of categorical perception and current theories of second language learning are reviewed. Following this, behavioral and neuroimaging evidence for and against acquisition of categorical representations is described. Finally, recommendations for future work are discussed. The paper concludes with a recommendation for integration of behavioral and neuroimaging work and theory in this area.


Subject(s)
Phonetics; Speech Perception; Language
10.
J Acoust Soc Am ; 151(5): 3496, 2022 May.
Article in English | MEDLINE | ID: mdl-35649935

ABSTRACT

Noise in healthcare settings, such as hospitals, often exceeds levels recommended by health organizations. Although researchers and medical professionals have raised concerns about the effect of these noise levels on spoken communication, objective measures of behavioral intelligibility in hospital noise are lacking. Further, no studies of intelligibility in hospital noise have used medically relevant terminology, which may impact intelligibility differently than the standard materials used in speech perception research and is essential for ensuring ecological validity. Here, intelligibility was measured using online testing for 69 young adult listeners in three listening conditions (quiet, speech-shaped noise, and hospital noise; 23 listeners per condition) for four sentence types. Three sentence types included medical terminology with varied lexical frequency and familiarity characteristics; a final sentence set included non-medically related sentences. Results showed that intelligibility was negatively impacted by both noise types, with no significant difference between the hospital and speech-shaped noise. Medically related sentences were not less intelligible overall, but word recognition accuracy was significantly positively correlated with both lexical frequency and familiarity. These results support the need for continued research on how noise levels in healthcare settings, in concert with less familiar medical terminology, impact communication and, ultimately, health outcomes.


Subject(s)
Speech Intelligibility; Speech Perception; Hospitals; Humans; Language; Noise/adverse effects; Young Adult
11.
Atten Percept Psychophys ; 84(3): 960-980, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35277847

ABSTRACT

Speech perception and production are critical skills when acquiring a new language. However, the nature of the relationship between these two processes is unclear, particularly for non-native speech sound contrasts. Although it has been assumed that perception and production are supportive, recent evidence has demonstrated that, under some circumstances, production can disrupt perceptual learning. Specifically, producing the to-be-learned contrast on each trial can disrupt perceptual learning of that contrast. Here, we treat speech perception and speech production as separate tasks. From this perspective, perceptual learning studies that include a production component on each trial create a task switch. We report two experiments that test how task switching can disrupt perceptual learning. One experiment demonstrates that the disruption caused by switching to production is sensitive to time delays: Increasing the delay between perception and production on a trial can reduce and even eliminate disruption of perceptual learning. The second experiment shows that if a task other than producing the to-be-learned contrast is imposed, the task-switching component of disruption is not influenced by a delay. These experiments provide a new understanding of the relationship between speech perception and speech production, and clarify conditions under which the two cooperate or compete.


Subject(s)
Phonetics; Speech Perception; Humans; Language; Learning; Speech
12.
J Acoust Soc Am ; 151(2): 1246, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35232086

ABSTRACT

Native talkers are able to enhance acoustic characteristics of their speech in a speaking style known as "clear speech," which is better understood by listeners than "plain speech." However, despite substantial research in the area of clear speech, it is less clear whether non-native talkers of various proficiency levels are able to adopt a clear speaking style and, if so, whether this style has perceptual benefits for native listeners. In the present study, native English listeners evaluated plain and clear speech produced by three groups: native English talkers, non-native talkers with lower proficiency, and non-native talkers with higher proficiency. Listeners completed a transcription task (i.e., an objective measure of speech intelligibility). We investigated intelligibility as a function of language background and proficiency and also examined the acoustic modifications associated with these perceptual benefits. The results of the study suggest that both native and non-native talkers modulate their speech when asked to adopt a clear speaking style, but that the size of the acoustic modifications, as well as the consequences of this speaking style for perception, differ as a function of language background and language proficiency.


Subject(s)
Speech Perception; Speech; Acoustics; Cognition; Language; Speech Intelligibility
13.
Lang Speech ; 65(2): 418-443, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34240630

ABSTRACT

To investigate the role of spectral pattern information in the perception of foreign-accented speech, we measured the effects of spectral shifts on judgments of talker discrimination, perceived naturalness, and intelligibility when listening to Mandarin-accented English and native-accented English sentences. In separate conditions, the spectral envelope and fundamental frequency (F0) contours were shifted up or down in three steps using coordinated scale factors (multiples of 8% and 30%, respectively). Experiment 1 showed that listeners perceive spectrally shifted sentences as coming from a different talker for both native-accented and foreign-accented speech. Experiment 2 demonstrated that downward shifts applied to male talkers and the largest upward shifts applied to all talkers reduced the perceived naturalness, regardless of accent. Overall, listeners rated foreign-accented speech as sounding less natural even for unshifted speech. In Experiment 3, introducing spectral shifts further lowered the intelligibility of foreign-accented speech. When speech from the same foreign-accented talker was shifted to simulate five different talkers, increased exposure failed to produce an improvement in intelligibility scores, similar to the pattern observed when listeners actually heard five foreign-accented talkers. Intelligibility of spectrally shifted native-accented speech was near ceiling performance initially, and no further improvement or decrement was observed. These experiments suggest a mechanism that utilizes spectral envelope and F0 cues in a talker-dependent manner to support the perception of foreign-accented speech.
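A small worked example of the arithmetic behind these coordinated shifts is sketched below, assuming an additive-percentage convention for upward steps and a reciprocal convention for downward steps; both conventions and the step pairing are assumptions for illustration, not the paper's signal-processing pipeline.

```python
# Coordinated scale factors: spectral-envelope steps in multiples of 8% and
# F0 steps in multiples of 30%, for three steps in each direction.
ENV_STEP, F0_STEP = 0.08, 0.30

for k in (1, 2, 3):
    env_up, f0_up = 1 + ENV_STEP * k, 1 + F0_STEP * k
    env_down, f0_down = 1 / env_up, 1 / f0_up   # assumed reciprocal convention for "down"
    print(f"step {k}: envelope x{env_up:.2f} (up) / x{env_down:.2f} (down), "
          f"F0 x{f0_up:.2f} (up) / x{f0_down:.2f} (down)")
```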


Subject(s)
Speech Perception; Speech; Cognition; Humans; Judgment; Language; Male; Speech Intelligibility
14.
Front Psychol ; 12: 661415, 2021.
Article in English | MEDLINE | ID: mdl-34220634

ABSTRACT

When talkers anticipate that a listener may have difficulty understanding their speech, they adopt a speaking style typically described as "clear speech." This speaking style includes a variety of acoustic modifications and has perceptual benefits for listeners. In the present study, we examine whether clear speaking styles also include modulation of lexical items selected and produced during naturalistic conversations. Our results demonstrate that talkers do, indeed, modulate their lexical selection, as measured by a variety of lexical diversity and lexical sophistication indices. Further, the results demonstrate that clear speech is not a monolithic construct. Talkers modulate their speech differently depending on the communication situation. We suggest that clear speech should be conceptualized as a set of speaking styles, in which talkers take the listener and communication situation into consideration.

15.
Cognition ; 210: 104577, 2021 May.
Article in English | MEDLINE | ID: mdl-33609911

ABSTRACT

Speaking involves both retrieving the sounds of a word (phonological planning) and realizing these selected sounds in fluid speech (articulation). Recent phonetic research on speech errors has argued that multiple candidate sounds in phonological planning can influence articulation because the pronunciation of mis-selected error sounds is slightly skewed towards unselected target sounds. Yet research to date has only examined these phonetic distortions in experimentally-elicited errors, leaving doubt as to whether they reflect tendencies in spontaneous speech. Here, we analyzed the pronunciation of speech errors of English-speaking adults in natural conversations relative to matched correct words by the same speakers, and found the conjectured phonetic distortions. Comparison of these data with a larger set of experimentally-elicited errors failed to reveal significant differences between the two types of errors. These findings provide ecologically-valid data supporting models that allow for information about multiple planning representations to simultaneously influence speech articulation.


Subject(s)
Phonetics; Speech; Adult; Communication; Humans; Language; Speech Production Measurement
16.
JASA Express Lett ; 1(1): 015207, 2021 Jan.
Article in English | MEDLINE | ID: mdl-36154082

ABSTRACT

Listeners improve their ability to understand nonnative speech through exposure. The present study examines the role of semantic predictability during adaptation. Listeners were trained on high-predictability, low-predictability, or semantically anomalous sentences. Results demonstrate that trained participants improve their perception of nonnative speech compared to untrained participants. Adaptation is most robust for the types of sentences participants heard during training; however, semantic predictability during exposure did not impact the amount of adaptation overall. Results show advantages in adaptation specific to the type of speech material, a finding similar to the specificity of adaptation previously demonstrated for individual talkers or accents.

17.
JASA Express Lett ; 1(10): 105201, 2021 Oct.
Article in English | MEDLINE | ID: mdl-36154214

ABSTRACT

Listeners often have difficulty understanding unfamiliar speech (e.g., non-native speech), but they are able to adapt to or improve their ability to understand unfamiliar speech. However, it is unclear whether non-native listeners demonstrate adaptation to novel native English speech broadly with relatively limited exposure. Thus, this study examines non-native English listeners' adaptation to native English speakers and whether talker variability affects adaptation. Results suggest that while greater variability initially disrupts non-native English listeners' perception of native English speakers, listeners are able to rapidly adapt to novel speakers and exposure to greater variability could result in cross-talker generalization.


Subject(s)
Multilingualism; Speech Perception; Acclimatization; Language; Speech
18.
J Acoust Soc Am ; 148(3): EL267, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33003859

ABSTRACT

To examine difficulties experienced by cochlear implant (CI) users when perceiving non-native speech, intelligibility of non-native speech was compared in conditions with single and multiple alternating talkers. In contrast to listeners with normal hearing, CI users showed no rapid talker-dependent adaptation, and their performance was approximately 40% lower following increased exposure in both talker conditions. Results suggest that lower performance for CI users may stem from the combined effects of limited spectral resolution, which diminishes perceptible differences across accents, and limited access to talker-specific acoustic features of speech, which reduces the ability to adapt to non-native speech in a talker-dependent manner.


Subject(s)
Cochlear Implantation; Cochlear Implants; Speech Perception; Cognition; Speech
19.
J Acoust Soc Am ; 147(6): EL511, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32611136

ABSTRACT

Loanword adaptation exhibits a bias favoring sound cue preservation, possibly reflecting a conservative caution against deleting cues whose expendability in a foreign language is uncertain. This study tests whether listeners are biased to preserve an acoustically ambiguous sound cue in a nonce word framed as originating from a foreign language. Results show the opposite: listeners are less likely to transcribe an ambiguous sound cue as a phonological segment when the word containing it is framed as a loanword. However, listeners who identify as more open and accommodating to foreign people and languages show relatively more preservation in the loanword condition.


Subject(s)
Cues; Speech Perception; Humans; Language; Phonetics; Speech Acoustics
20.
J Acoust Soc Am ; 147(5): 3702, 2020 May.
Article in English | MEDLINE | ID: mdl-32486786

ABSTRACT

Clear speech is a style that speakers adopt when talking with listeners who they anticipate may have a problem understanding speech. This study examines whether native English speakers use clear speech in conversations with non-native English speakers when they are not explicitly asked to do so (i.e., clear speech elicited with naturalistic methods). The results suggest that native English speakers do use clear speech in these conversations even without explicit instruction. Native English speakers' speech is more intelligible in the early portions of each conversation than in the late portions. Further, speakers "reset" to clearer speech at the start of each Diapix picture. Additionally, acoustic properties of the speech are examined to complement the intelligibility results. These findings suggest that the instigation of clear speech may be listener-driven, but the maintenance of clear speech is likely more speaker-driven.


Subject(s)
Speech Intelligibility; Speech Perception; Language; Speech; Speech Production Measurement