1.
J Exp Psychol Gen; 153(4): 957-981, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38095981

ABSTRACT

Poor performance on phonological tasks is characteristic of neurodevelopmental language disorders (dyslexia and/or developmental language disorder). Perceptual deficit accounts attribute phonological dysfunction to lower-level deficits in speech-sound processing. However, a causal pathway from speech perception to phonological performance has not been established. We assessed this relationship in typical adults by experimentally disrupting speech-sound discrimination in a phonological short-term memory (pSTM) task. We used an automated audio-morphing method (Rogers & Davis, 2017) to create ambiguous intermediate syllables between 16 letter name-letter name ("B"-"P") and letter name-word ("B"-"we") pairs. High- and low-ambiguity syllables were used in a pSTM task in which participants (N = 36) recalled six- and eight-letter name sequences. Low-ambiguity sequences were better recalled than high-ambiguity sequences, for letter name-letter name but not letter name-word morphed syllables. A further experiment replicated this ambiguity cost (N = 26), but failed to show retroactive or prospective effects for mixed high- and low-ambiguity sequences, in contrast to pSTM findings for speech-in-noise (SiN; Guang et al., 2020; Rabbitt, 1968). These experiments show that ambiguous speech sounds impair pSTM, via a different mechanism to SiN recall. We further show that the effect of ambiguous speech on recall is context-specific, limited, and does not transfer to recall of nonconfusable items. This indicates that speech perception deficits are not a plausible cause of pSTM difficulties in language disorders. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
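The automated morphing method (Rogers & Davis, 2017) used to build these continua is not reproduced here, but the way a morph ratio indexes steps between two recordings can be sketched with a deliberately naive stand-in, a sample-wise linear cross-fade (real auditory morphing interpolates spectral structure rather than raw samples, so this sketch would not produce genuinely ambiguous syllables):

```ts
// Naive stand-in for audio morphing: a linear cross-fade between two
// recordings. ratio = 0 returns a, ratio = 1 returns b, and 0.5 is the
// maximally mixed midpoint. Illustrative only.
function crossFade(a: Float32Array, b: Float32Array, ratio: number): Float32Array {
  const n = Math.min(a.length, b.length);
  const out = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    out[i] = (1 - ratio) * a[i] + ratio * b[i];
  }
  return out;
}

// A hypothetical seven-step continuum between "B" and "P" recordings;
// high-ambiguity items come from the middle steps, low-ambiguity items
// from the endpoints.
const ratios = [0, 1 / 6, 2 / 6, 3 / 6, 4 / 6, 5 / 6, 1];
```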


Subject(s)
Dyslexia , Language Disorders , Speech Perception , Adult , Humans , Speech , Memory, Short-Term , Phonetics , Articulation Disorders
3.
PLoS One; 18(1): e0279024, 2023.
Article in English | MEDLINE | ID: mdl-36634109

ABSTRACT

Auditory rhythms are ubiquitous in music, speech, and other everyday sounds. Yet, it is unclear how perceived rhythms arise from the repeating structure of sounds. For speech, it is unclear whether rhythm is solely derived from acoustic properties (e.g., rapid amplitude changes), or whether it is also influenced by the linguistic units (syllables, words, etc.) that listeners extract from intelligible speech. Here, we present three experiments in which participants were asked to detect an irregularity in rhythmically spoken speech sequences. In each experiment, we reduce the number of stimulus properties that differ between intelligible and unintelligible speech sounds and show that these acoustically matched intelligibility conditions nonetheless lead to differences in rhythm perception. In Experiment 1, we replicate a previous study showing that rhythm perception is improved for intelligible (16-channel vocoded) as compared to unintelligible (1-channel vocoded) speech, despite near-identical broadband amplitude modulations. In Experiment 2, we use spectrally rotated 16-channel speech to show that the effect of intelligibility cannot be explained by differences in spectral complexity. In Experiment 3, we compare rhythm perception for sine-wave speech signals when they are heard as non-speech (by naïve listeners) and, after training, when identical sounds are perceived as speech. In all cases, detection of rhythmic regularity is enhanced when participants perceive the stimulus as speech compared to when they do not. Together, these findings demonstrate that intelligibility enhances the perception of timing changes in speech, and that rhythm perception is hence linked to processes that extract abstract linguistic units from sound.
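For readers unfamiliar with vocoding, the 1-channel (unintelligible) condition can be pictured as keeping the broadband amplitude envelope of the speech while replacing its spectral detail with noise. The sketch below is a simplification under assumed parameters (44.1 kHz sample rate, ~10 ms smoothing window); the experiments used proper multi-band vocoders in which each of 16 band-pass channels is enveloped and vocoded separately before summing:

```ts
// Simplified 1-channel noise vocoding: extract the broadband amplitude
// envelope of a speech waveform and use it to modulate white noise.
// A real 16-channel vocoder would first split the signal into band-pass
// channels and vocode each band separately before summing.

function envelope(signal: Float32Array, win = 441): Float32Array {
  // Rectify, then smooth with a moving average (441 samples ~ 10 ms at 44.1 kHz).
  const env = new Float32Array(signal.length);
  let sum = 0;
  for (let i = 0; i < signal.length; i++) {
    sum += Math.abs(signal[i]);
    if (i >= win) sum -= Math.abs(signal[i - win]);
    env[i] = sum / Math.min(i + 1, win);
  }
  return env;
}

function noiseVocode(signal: Float32Array): Float32Array {
  const env = envelope(signal);
  const out = new Float32Array(signal.length);
  for (let i = 0; i < out.length; i++) {
    out[i] = env[i] * (2 * Math.random() - 1); // envelope-modulated noise
  }
  return out;
}
```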


Subject(s)
Speech Intelligibility , Speech Perception , Humans , Phonetics , Acoustics , Cognition , Acoustic Stimulation , Auditory Perception
4.
Behav Res Methods; 55(4): 1863-1873, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35768741

ABSTRACT

Testing that an experiment works as intended is critical for identifying design problems and catching technical errors that could invalidate the results. Testing is also time-consuming because the experiment must be run manually. This makes testing costly for researchers, and testing is therefore less comprehensive than in other kinds of software development, where tools to automate and speed up the testing process are widely used. In this paper, we describe an approach that substantially reduces the time required to test behavioral experiments: automated simulation of participant behavior. We describe how software that is used to build experiments can use information contained in the experiment's code to automatically generate plausible participant behavior. We demonstrate this through an implementation using jsPsych. We then describe four potential scenarios in which automated simulation of participant behavior can improve the way researchers build experiments. Each scenario includes a demo and accompanying code. The full set of examples can be found at https://jspsych.github.io/simulation-examples/.
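As a flavor of the approach, here is a minimal sketch using the simulation mode added in jsPsych 7; the timeline contents are invented for illustration, and plugin defaults may differ across versions (the linked examples are authoritative):

```ts
import { initJsPsych } from "jspsych";
import htmlKeyboardResponse from "@jspsych/plugin-html-keyboard-response";

// Dump the simulated dataset when the (simulated) experiment finishes.
const jsPsych = initJsPsych({
  on_finish: () => console.log(jsPsych.data.get().csv()),
});

// A hypothetical one-trial timeline; any ordinary timeline works here.
const timeline = [
  {
    type: htmlKeyboardResponse,
    stimulus: "<p>Press f for 'same', j for 'different'.</p>",
    choices: ["f", "j"],
  },
];

// "data-only" mode generates plausible responses (valid keys, sampled
// response times) without rendering anything, so a full experiment can be
// "run" in moments to check that the data it produces look sane.
jsPsych.simulate(timeline, "data-only");
```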


Subject(s)
Software , Humans , Computer Simulation
5.
J Cogn; 5(1): 4, 2022.
Article in English | MEDLINE | ID: mdl-36072113

ABSTRACT

Words with multiple meanings (e.g., bark of the tree/dog) have provided important insights into several key topics within psycholinguistics. Experiments that use ambiguous words require stimuli to be carefully controlled for the relative frequency (dominance) of their different meanings, as this property has pervasive effects on numerous tasks. Dominance scores are often calculated from word association responses: by measuring the proportion of participants who respond to the word 'bark' with dog-related (e.g., "woof") or tree-related (e.g., "branch") responses, researchers can estimate people's relative preferences for these meanings. We collated data from several recent experiments and pre-tests to construct a dataset of 29,542 valid responses for 243 spoken ambiguous words, collected from participants in the United Kingdom. We provide summary dominance data for the 182 ambiguous words that have a minimum of 100 responses, and a tool for automatically coding new word association responses based on the responses in our coded set, which allows additional data to be scored and added to this database more easily. All files can be found at: https://osf.io/uy47w/.
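The automatic coding step can be pictured as a lookup against the existing coded responses, with dominance computed as the proportion of responses assigned to a given meaning. The sketch below is a hypothetical simplification (all names are invented; the actual tool at the OSF link presumably also handles spelling variants and other normalization):

```ts
// Hypothetical sketch of automatic response coding and dominance scoring.
// codedSet maps an ambiguous cue word to previously coded responses,
// e.g. codedSet["bark"] = { woof: "animal", branch: "tree" }.

type CodedSet = Record<string, Record<string, string>>;

function codeResponse(codedSet: CodedSet, cue: string, response: string): string | null {
  const byCue = codedSet[cue.toLowerCase()];
  if (!byCue) return null;
  // null means the response is new and still needs manual coding.
  return byCue[response.trim().toLowerCase()] ?? null;
}

function dominance(labels: (string | null)[], meaning: string): number {
  // Proportion of codable responses assigned to the given meaning.
  const coded = labels.filter((l): l is string => l !== null);
  return coded.filter((l) => l === meaning).length / coded.length;
}

const codedSet: CodedSet = { bark: { woof: "animal", branch: "tree" } };
const labels = ["woof", "branch", "woof", "growl"].map((r) =>
  codeResponse(codedSet, "bark", r),
);
console.log(dominance(labels, "animal")); // 2 of 3 coded responses -> 0.67
```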

6.
Neurobiol Lang (Camb); 3(4): 665-698, 2022.
Article in English | MEDLINE | ID: mdl-36742011

ABSTRACT

Listening to spoken language engages domain-general multiple demand (MD; frontoparietal) regions of the human brain, in addition to domain-selective (frontotemporal) language regions, particularly when comprehension is challenging. However, there is limited evidence that the MD network makes a functional contribution to core aspects of understanding language. In a behavioural study of volunteers (n = 19) with chronic brain lesions, but without aphasia, we assessed the causal role of these networks in perceiving, comprehending, and adapting to spoken sentences made more challenging by acoustic degradation or lexico-semantic ambiguity. We measured perception of and adaptation to acoustically degraded (noise-vocoded) sentences with a word report task before and after training. Participants with greater damage to MD but not language regions required more vocoder channels to achieve 50% word report, indicating impaired perception. Perception improved following training, reflecting adaptation to acoustic degradation, but adaptation was unrelated to lesion location or extent. Comprehension of spoken sentences with semantically ambiguous words was measured with a sentence coherence judgement task. Accuracy was high and unaffected by lesion location or extent. Adaptation to semantic ambiguity was measured in a subsequent word association task, which showed that availability of lower-frequency meanings of ambiguous words increased following their comprehension (word-meaning priming). Word-meaning priming was reduced for participants with greater damage to language but not MD regions. Language and MD networks make dissociable contributions to challenging speech comprehension: using recent experience to update word-meaning preferences depends on language-selective regions, whereas the domain-general MD network plays a causal role in reporting words from degraded speech.
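The 50% word-report threshold used as the perception measure can be illustrated by interpolating between tested channel counts; this is only a sketch of the idea, with invented numbers, and the study's actual fitting procedure may have differed:

```ts
// Illustrative estimate of the channel count needed for 50% word report,
// by linear interpolation between tested channel counts.

function channelsForThreshold(
  channels: number[], // tested channel counts, ascending
  accuracy: number[], // proportion of words reported at each count
  threshold = 0.5,
): number | null {
  for (let i = 1; i < channels.length; i++) {
    if (accuracy[i - 1] < threshold && accuracy[i] >= threshold) {
      const t = (threshold - accuracy[i - 1]) / (accuracy[i] - accuracy[i - 1]);
      return channels[i - 1] + t * (channels[i] - channels[i - 1]);
    }
  }
  return null; // threshold never crossed within the tested range
}

// A participant with poorer perception needs more channels to reach 50%:
console.log(channelsForThreshold([2, 4, 8, 16], [0.1, 0.35, 0.7, 0.95])); // ~5.7
console.log(channelsForThreshold([2, 4, 8, 16], [0.05, 0.2, 0.45, 0.8])); // ~9.1
```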

7.
J Neurosci; 41(32): 6919-6932, 2021 08 11.
Article in English | MEDLINE | ID: mdl-34210777

ABSTRACT

Human listeners achieve quick and effortless speech comprehension through computations of conditional probability using Bayes' rule. However, the neural implementation of Bayesian perceptual inference remains unclear. Competitive-selection accounts (e.g., TRACE) propose that word recognition is achieved through direct inhibitory connections between units representing candidate words that share segments (e.g., hygiene and hijack share /haidʒ/). Manipulations that increase lexical uncertainty should increase neural responses associated with word recognition when words cannot be uniquely identified. In contrast, predictive-selection accounts (e.g., predictive coding) propose that spoken word recognition involves comparing heard and predicted speech sounds and using prediction error to update lexical representations. Increased lexical uncertainty for words such as hygiene and hijack will increase prediction error, and hence neural activity, only at later time points, when different segments are predicted. We collected MEG data from male and female listeners to test these two Bayesian mechanisms, and used a competitor-priming manipulation to change the prior probability of specific words. Lexical decision responses showed delayed recognition of target words (hygiene) following presentation of a neighboring prime word (hijack) several minutes earlier. However, this effect was not observed with pseudoword primes (higent) or targets (hijure). Crucially, MEG responses in the superior temporal gyrus (STG) showed greater neural responses for word-primed words after the point at which they were uniquely identified (after /haidʒ/ in hygiene) but not before; similar changes were again absent for pseudowords. These findings are consistent with accounts of spoken word recognition in which neural computations of prediction error play a central role. SIGNIFICANCE STATEMENT: Effective speech perception is critical to daily life and involves computations that combine speech signals with prior knowledge of spoken words (i.e., Bayesian perceptual inference). This study specifies the neural mechanisms that support spoken word recognition by testing two distinct implementations of Bayesian perceptual inference. Most established theories propose direct competition between lexical units, such that inhibition of irrelevant candidates leads to selection of critical words. Our results instead support predictive-selection theories (e.g., predictive coding): by comparing heard and predicted speech sounds, neural computations of prediction error can help listeners continuously update lexical probabilities, allowing for more rapid word identification.
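To make the Bayesian framing concrete, here is a toy posterior update over a two-word lexicon as speech segments arrive; the ASCII segment labels, uniform priors, and all-or-none likelihoods are invented simplifications rather than the paper's model, but they show why hygiene and hijack remain equally probable until after /haidʒ/:

```ts
// Toy Bayesian update over a two-word lexicon as segments arrive.
// "d3" stands in for /dʒ/. Priors stand in for word frequency and recent
// exposure (competitor priming would raise a prime word's prior); the
// likelihood is all-or-none: 1 if the heard segments match the start of
// the candidate word, 0 otherwise.

type Lexicon = Record<string, { segments: string[]; prior: number }>;

function posterior(lexicon: Lexicon, heard: string[]): Record<string, number> {
  const scores: Record<string, number> = {};
  let total = 0;
  for (const [word, { segments, prior }] of Object.entries(lexicon)) {
    const matches = heard.every((seg, i) => segments[i] === seg);
    scores[word] = matches ? prior : 0;
    total += scores[word];
  }
  for (const word of Object.keys(scores)) {
    scores[word] = total > 0 ? scores[word] / total : 0;
  }
  return scores;
}

const lexicon: Lexicon = {
  hygiene: { segments: ["h", "ai", "d3", "i", "n"], prior: 0.5 },
  hijack: { segments: ["h", "ai", "d3", "a", "k"], prior: 0.5 },
};

// Before the disambiguation point, both candidates stay active:
console.log(posterior(lexicon, ["h", "ai", "d3"])); // hygiene 0.5, hijack 0.5
// After it, the posterior collapses onto the single matching word:
console.log(posterior(lexicon, ["h", "ai", "d3", "i"])); // hygiene 1, hijack 0
```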


Subject(s)
Recognition, Psychology/physiology , Speech Perception/physiology , Temporal Lobe/physiology , Adult , Bayes Theorem , Comprehension/physiology , Female , Humans , Magnetoencephalography , Male , Middle Aged , Young Adult
8.
J Cogn Neurosci; 32(3): 403-425, 2020 03.
Article in English | MEDLINE | ID: mdl-31682564

ABSTRACT

Semantically ambiguous words challenge speech comprehension, particularly when listeners must select a less frequent (subordinate) meaning at disambiguation. Using combined magnetoencephalography (MEG) and EEG, we measured neural responses associated with distinct cognitive operations during semantic ambiguity resolution in spoken sentences: (i) initial activation and selection of meanings in response to an ambiguous word and (ii) sentence reinterpretation in response to subsequent disambiguation to a subordinate meaning. Ambiguous words elicited an increased neural response approximately 400-800 msec after their acoustic offset compared with unambiguous control words in left frontotemporal MEG sensors, corresponding to sources in bilateral frontotemporal brain regions. This response may reflect increased demands on processes by which multiple alternative meanings are activated and maintained until later selection. Disambiguating words heard after an ambiguous word were associated with marginally increased neural activity over bilateral temporal MEG sensors and a central cluster of EEG electrodes, which localized to similar bilateral frontal and left temporal regions. This later neural response may reflect effortful semantic integration or elicitation of prediction errors that guide reinterpretation of previously selected word meanings. Across participants, the amplitude of the ambiguity response showed a marginal positive correlation with comprehension scores, suggesting that sentence comprehension benefits from additional processing around the time of an ambiguous word. Better comprehenders may have increased availability of subordinate meanings, perhaps due to higher quality lexical representations and reflected in a positive correlation between vocabulary size and comprehension success.


Subject(s)
Brain/physiology , Comprehension/physiology , Semantics , Speech Perception/physiology , Adult , Electroencephalography , Female , Humans , Magnetoencephalography , Male , Vocabulary , Young Adult
9.
J Exp Psychol Learn Mem Cogn; 44(10): 1533-1561, 2018 Oct.
Article in English | MEDLINE | ID: mdl-29389181

ABSTRACT

Research has shown that adults' lexical-semantic representations are surprisingly malleable. For instance, the interpretation of ambiguous words (e.g., bark) is influenced by experience such that recently encountered meanings become more readily available (Rodd et al., 2016, 2013). However, the mechanism underlying this word-meaning priming effect remains unclear, and competing accounts make different predictions about the extent to which information about word meanings that is gained within one modality (e.g., speech) is transferred to the other modality (e.g., reading) to aid comprehension. In two Web-based experiments, ambiguous target words were primed with either written or spoken sentences that biased their interpretation toward a subordinate meaning, or were unprimed. About 20 min after the prime exposure, interpretation of these target words was tested by presenting them in either written or spoken form, using word association (Experiment 1, N = 78) and speeded semantic relatedness decisions (Experiment 2, N = 181). Both experiments replicated the auditory unimodal priming effect shown previously (Rodd et al., 2016, 2013) and revealed significant cross-modal priming: primed meanings were retrieved more frequently and swiftly across all primed conditions compared with the unprimed baseline. Furthermore, there were no reliable differences in priming levels between unimodal and cross-modal prime-test conditions. These results indicate that recent experience with ambiguous word meanings can bias the reader's or listener's later interpretation of these words in a modality-general way. We identify possible loci of this effect within the context of models of long-term priming and ambiguity resolution. (PsycINFO Database Record (c) 2018 APA, all rights reserved).


Subject(s)
Generalization, Psychological , Reading , Speech Perception , Adolescent , Adult , Association , Female , Humans , Male , Middle Aged , Psycholinguistics , Repetition Priming , Semantics , Vocabulary , Young Adult
10.
J Exp Psychol Learn Mem Cogn; 44(7): 1130-1150, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29283607

ABSTRACT

Current models of word-meaning access typically assume that lexical-semantic representations of ambiguous words (e.g., 'bark of the dog/tree') reach a relatively stable state in adulthood, with only the relative frequencies of meanings and immediate sentence context determining meaning preference. However, recent experience also affects interpretation: recently encountered word-meanings become more readily available (Rodd et al., 2016, 2013). Here, 3 experiments investigated how multiple encounters with word-meanings influence the subsequent interpretation of these ambiguous words. Participants heard ambiguous words contextually disambiguated towards a particular meaning and, after a 20- to 30-min delay, interpretations of the words were tested in isolation. We replicate the finding that 1 encounter with an ambiguous word biases the later interpretation of this word towards the primed meaning, for both subordinate (Experiments 1, 2, 3) and dominant meanings (Experiment 1). In addition, for the first time, we show cumulative effects of multiple repetitions of both the same and different meanings. The effect of a single subordinate exposure persisted after a subsequent encounter with the dominant meaning, compared to a dominant exposure alone (Experiment 1). Furthermore, 3 subordinate word-meaning repetitions provided an additional boost to priming compared to 1, although only when their presentation was spaced (Experiments 2, 3); massed repetitions provided no such boost (Experiments 1, 3). These findings indicate that comprehension is guided by the collective effect of multiple recently activated meanings and that the spacing of these activations is key to producing lasting updates to the lexical-semantic network. (PsycINFO Database Record (c) 2018 APA, all rights reserved).


Subject(s)
Linguistics , Repetition Priming , Adolescent , Adult , Association , Female , Humans , Male , Random Allocation , Young Adult
11.
Cogn Psychol; 98: 73-101, 2017 11.
Article in English | MEDLINE | ID: mdl-28881224

ABSTRACT

Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American-dominant meaning (e.g., the hat meaning of "bonnet") in a word association task if they heard the words in an American rather than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and a sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval, such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead, they use accent information to determine the dialect identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word-meaning access.


Subject(s)
Recognition, Psychology , Speech Perception/physiology , Speech/physiology , Adult , Comprehension , Female , Humans , Male , United Kingdom , United States
12.
Q J Exp Psychol (Hove); 70(12): 2403-2418, 2017 Dec.
Article in English | MEDLINE | ID: mdl-27758161

ABSTRACT

The capacity of serially ordered auditory-verbal short-term memory (AVSTM) is sensitive to the timing of the material to be stored, and both temporal processing and AVSTM capacity are implicated in the development of language. We developed a novel "rehearsal-probe" task to investigate the relationship between temporal precision and the capacity to remember serial order. Participants listened to a sub-span sequence of spoken digits and silently rehearsed the items and their timing during an unfilled retention interval. After an unpredictable delay, a tone prompted report of the item being rehearsed at that moment. An initial experiment showed cyclic distributions of item responses over time, with peaks preserving serial order and broad, overlapping tails. The spread of the response distributions increased with additional memory load and correlated negatively with participants' auditory digit spans. A second study replicated the negative correlation and demonstrated its specificity to AVSTM by controlling for differences in visuo-spatial STM and nonverbal IQ. The results are consistent with the idea that a common resource underpins both the temporal precision and capacity of AVSTM. The rehearsal-probe task may provide a valuable tool for investigating links between temporal processing and AVSTM capacity in the context of speech and language abilities.
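Because responses in this task recur cyclically as participants loop through the rehearsed list, the spread of a response distribution is naturally treated as a circular quantity. The sketch below computes a circular standard deviation for probed responses; it is a hypothetical illustration of that idea, not the paper's actual analysis:

```ts
// Treat probed-rehearsal responses as circular data: each response time
// (modulo the rehearsal cycle) maps to an angle, and the circular standard
// deviation then indexes temporal precision.

function circularSD(responseTimes: number[], cycleLength: number): number {
  let sumCos = 0;
  let sumSin = 0;
  for (const t of responseTimes) {
    const theta = (2 * Math.PI * (t % cycleLength)) / cycleLength;
    sumCos += Math.cos(theta);
    sumSin += Math.sin(theta);
  }
  // Mean resultant length R: 1 = perfectly precise, 0 = uniform spread.
  const R = Math.hypot(sumCos, sumSin) / responseTimes.length;
  return Math.sqrt(-2 * Math.log(R)); // in radians; larger = less precise
}

// Tightly clustered responses yield a small spread:
console.log(circularSD([100, 110, 95, 105], 1000)); // small (~0.04)
// Widely scattered responses yield a very large spread:
console.log(circularSD([100, 350, 600, 850], 1000)); // near-uniform -> huge
```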


Subject(s)
Auditory Perception/physiology , Memory, Short-Term/physiology , Verbal Learning/physiology , Acoustic Stimulation , Adolescent , Analysis of Variance , Female , Humans , Individuality , Male , Retention, Psychology/physiology , Statistics as Topic , Time Factors , Young Adult