ABSTRACT
People are intuitive Dualists-they tacitly consider the mind as ethereal, distinct from the body. Here we ask whether Dualism emerges naturally from the conflicting core principles that guide reasoning about objects, on the one hand, and about the minds of agents (theory of mind, ToM), on the other. To address this question, we explore Dualist reasoning in autism spectrum disorder (ASD)-a congenital disorder known to compromise ToM. If Dualism arises from ToM, then ASD ought to attenuate Dualism and promote Physicalism. In line with this prediction, Experiment 1 shows that, compared to controls, people with ASD are more likely to view psychological traits as embodied-as likely to manifest in a replica of one's body. Experiment 2 demonstrates that, unlike controls, people with ASD do not consider thoughts as disembodied-as persistent in the afterlife (upon the body's demise). If ASD promotes the perception of the psyche as embodied, and if (per Essentialism) embodiment suggests innateness, then ASD should further promote Nativism-this bias is shown in Experiment 3. Finally, Experiment 4 demonstrates that, in neurotypical (NT) participants, difficulties with ToM correlate with Physicalism. These results are the first to show that ASD attenuates Dualist reasoning and to link Dualism to ToM. These conclusions suggest that the mind-body distinction might be natural for people to entertain.
Subject(s)
General Anesthetics, Autism Spectrum Disorder, Autistic Disorder, Humans, Problem Solving, Perception
ABSTRACT
Listeners who use cochlear implants show variability in speech recognition. Research suggests that structured auditory training can improve speech recognition outcomes in cochlear implant users, and a central goal in the rehabilitation literature is to identify factors that maximize training. Here, we examined factors that may influence perceptual learning for noise-vocoded speech in normal hearing listeners as a foundational step towards clinical recommendations. Three groups of listeners were exposed to anomalous noise-vocoded sentences and completed one of three training tasks: transcription with feedback, transcription without feedback, or talker identification. Listeners completed a word transcription test at three time points: immediately before training, immediately after training, and one week following training. Accuracy at test was indexed by keyword accuracy at the sentence-initial and sentence-final position for high and low predictability noise-vocoded sentences. Following training, listeners showed improved transcription for both sentence-initial and sentence-final items, and for both low and high predictability sentences. The training groups showed robust and equivalent learning of noise-vocoded sentences immediately after training. Critically, gains were largely maintained equivalently among training groups one week later. These results converge with evidence pointing towards the utility of non-traditional training tasks to maximize perceptual learning of noise-vocoded speech.
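Noise vocoding, the degradation used here, replaces the fine structure within each frequency band of a speech signal with noise modulated by that band's amplitude envelope. The single-band sketch below illustrates that standard recipe in plain Python; the function names are ours, and a real vocoder would first filter the signal into several bands and use proper filtering rather than this rectify-and-smooth envelope.

```python
import math
import random

def envelope(signal, win=64):
    """Amplitude envelope via rectification plus a trailing moving average."""
    rect = [abs(s) for s in signal]
    out = []
    acc = 0.0
    for i, r in enumerate(rect):
        acc += r
        if i >= win:
            acc -= rect[i - win]  # drop the sample leaving the window
        out.append(acc / min(i + 1, win))
    return out

def noise_vocode_band(band_signal, seed=0):
    """Replace a band's fine structure with envelope-modulated noise."""
    rng = random.Random(seed)
    env = envelope(band_signal)
    return [e * rng.uniform(-1.0, 1.0) for e in env]

# Example: a 200-Hz tone sampled at 8 kHz with a brief onset ramp
sr = 8000
tone = [math.sin(2 * math.pi * 200 * t / sr) * min(t / 400, 1.0)
        for t in range(2000)]
vocoded = noise_vocode_band(tone)
```

The vocoded output preserves the slow amplitude contour of the input while discarding its periodicity, which is why comprehension requires the perceptual relearning studied in this work.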
Subject(s)
Cochlear Implantation, Cochlear Implants, Speech Perception, Noise/adverse effects, Speech
ABSTRACT
Spectral properties of earlier sounds (context) influence recognition of later sounds (target). Acoustic variability in context stimuli can disrupt this process. When mean fundamental frequencies (f0's) of preceding context sentences were highly variable across trials, shifts in target vowel categorization [due to spectral contrast effects (SCEs)] were smaller than when sentence mean f0's were less variable; when sentences were rearranged to exhibit high or low variability in mean first formant frequencies (F1) in a given block, SCE magnitudes were equivalent [Assgari, Theodore, and Stilp (2019). J. Acoust. Soc. Am. 145(3), 1443-1454]. However, since sentences were originally chosen based on variability in mean f0, stimuli underrepresented the extent to which mean F1 could vary. Here, target vowels (/ɪ/-/ɛ/) were categorized following context sentences that varied substantially in mean F1 (experiment 1) or mean F3 (experiment 2) with variability in mean f0 held constant. In experiment 1, SCE magnitudes were equivalent whether context sentences had high or low variability in mean F1; the same pattern was observed in experiment 2 for new sentences with high or low variability in mean F3. Variability in some acoustic properties (mean f0) can be more perceptually consequential than others (mean F1, mean F3), but these results may be task-dependent.
Subject(s)
Phonetics, Speech Perception, Sound, Sound Spectrography, Speech Acoustics
ABSTRACT
Previous research suggests that learning to use a phonetic property [e.g., voice-onset-time (VOT)] for talker identity supports a left ear processing advantage. Specifically, listeners trained to identify two "talkers" who only differed in characteristic VOTs showed faster talker identification for stimuli presented to the left ear than for stimuli presented to the right ear, which is interpreted as evidence of hemispheric lateralization consistent with task demands. Experiment 1 (n = 97) aimed to replicate this finding and identify predictors of performance; experiment 2 (n = 79) aimed to replicate this finding under conditions that better facilitate observation of laterality effects. Listeners completed a talker identification task during pretest, training, and posttest phases. Inhibition, category identification, and auditory acuity were also assessed in experiment 1. Listeners learned to use VOT for talker identity, and learning was positively associated with auditory acuity. Talker identification was not influenced by ear of presentation, and Bayes factors indicated strong support for the null. These results suggest that talker-specific phonetic variation is not sufficient to induce a left ear advantage for talker identification; together with the extant literature, this instead suggests that hemispheric lateralization for talker-specific phonetic variation requires phonetic variation to be conditioned on talker differences in source characteristics.
Subject(s)
Cues (Psychology), Phonetics, Bayes Theorem, Auditory Perception, Discrimination (Psychology)
ABSTRACT
Listeners use talker-specific phonetic structure to facilitate language comprehension. This study tests whether sensitivity to talker-specific phonetic variation also facilitates talker identification. During training, two listener groups learned to associate talkers' voices with cartoon pseudo-faces. For one group, each talker produced characteristically different voice-onset-time values; for the other group, no talker-specific phonetic structure was present. After training, listeners were tested on talker identification for trained and novel words, which was improved for those who heard structured phonetic variation compared to those who did not. These findings suggest an additive benefit of talker-specific phonetic variation for talker identification beyond traditional indexical cues.
ABSTRACT
The perception of any given sound is influenced by surrounding sounds. When successive sounds differ in their spectral compositions, these differences may be perceptually magnified, resulting in spectral contrast effects (SCEs). For example, listeners are more likely to perceive /ɪ/ (low F1) following sentences with higher F1 frequencies; listeners are also more likely to perceive /ɛ/ (high F1) following sentences with lower F1 frequencies. Previous research showed that SCEs for vowel categorization were attenuated when sentence contexts were spoken by different talkers [Assgari and Stilp (2015). J. Acoust. Soc. Am. 138(5), 3023-3032], but the locus of this diminished contextual influence was not specified. Here, three experiments examined implications of variable talker acoustics for SCEs in the categorization of /ɪ/ and /ɛ/. The results showed that SCEs were smaller when the mean fundamental frequency (f0) of context sentences was highly variable across talkers compared to when mean f0 was more consistent, even when talker gender was held constant. In contrast, SCE magnitudes were not influenced by variability in mean F1. These findings suggest that talker variability attenuates SCEs due to diminished consistency of f0 as a contextual influence. Connections between these results and talker normalization are considered.
ABSTRACT
The current study investigates the role of language experience in generalizing indexical information across languages within bilingual speech. Participants (n = 48) learned to identify bilingual talkers speaking in one of their languages and were then tested on their ability to identify the same talker when speaking the same language and when speaking their other language. Both monolingual and bilingual participants showed above chance performance in identifying the talkers in both language contexts. However, bilingual participants outperformed monolinguals in generalizing knowledge about the speaker's voice across their two familiar languages, which may be driven by their experience with language mixing.
ABSTRACT
Listeners use lexical information to resolve ambiguity in the speech signal, resulting in the restructuring of speech sound categories. Recent findings suggest that lexically guided perceptual learning is attenuated when listeners use a perception-focused listening strategy (which directs attention towards surface variation) compared to when listeners use a comprehension-focused listening strategy (which directs attention towards higher-level linguistic information). However, previous investigations used the word position of the ambiguity to manipulate listening strategy, raising the possibility that attenuated learning reflected decreased strength of lexical recruitment instead of a perception-oriented listening strategy. The current work tests this hypothesis. Listeners completed an exposure phase followed by a test phase. During exposure, listeners heard an ambiguous fricative embedded in word-medial lexical contexts that supported realization of the ambiguity as /ʃ/. At test, listeners categorized members of an /ɛsi/-/ɛʃi/ continuum. Listening strategy was manipulated via exposure task (experiment 1) and explicit acknowledgement of the ambiguity (experiment 2). Compared to control participants, listeners who were exposed to the ambiguity showed more /ʃ/ responses at test; critically, the magnitude of learning did not differ across listening strategy conditions. These results suggest that given sufficient lexical context, lexically guided perceptual learning is robust to task-based changes in listening strategy.
Subject(s)
Language, Learning, Speech Perception, Adolescent, Adult, Female, Humans, Male
ABSTRACT
Listeners use lexical information to retune the mapping between the acoustic signal and speech sound representations, resulting in changes to phonetic category boundaries. Other research shows that phonetic categories have a rich internal structure; within-category variation is represented in a graded fashion. The current work examined whether lexically informed perceptual learning promotes a comprehensive reorganization of internal category structure. The results showed a reorganization of internal structure for one but not both of the examined categories, which may reflect an attenuation of learning for distributions with extensive category overlap. This finding points towards potential input-driven constraints on lexically guided phonetic retuning.
ABSTRACT
Research suggests that phonological ability exerts a gradient influence on talker identification, including evidence that adults and children with reading disability show impaired talker recognition for native and non-native languages. The present study examined whether this relationship is also observed among unimpaired readers. Learning rate and generalization of learning in a talker identification task were examined in average and advanced readers who were tested in both native and non-native language conditions. The results indicate that even among unimpaired readers, phonological competence as captured by reading ability exerts a gradient influence on perceptual learning for talkers' voices.
Subject(s)
Reading, Recognition (Psychology)/physiology, Voice/physiology, Acoustic Stimulation, Adolescent, Analysis of Variance, Female, Humans, Language, Learning, Male, Phonetics, Speech Perception/physiology, Young Adult
ABSTRACT
A primary goal for models of speech perception is to describe how listeners achieve reliable comprehension given a lack of invariance between the acoustic signal and individual speech sounds. For example, individual talkers differ in how they implement phonetic properties of speech. Research suggests that listeners attain perceptual constancy by processing acoustic variation categorically while maintaining graded internal category structure. Moreover, listeners will use lexical information to modify category boundaries to learn to interpret a talker's ambiguous productions. The current work examines perceptual learning for talker differences that signal well-defined, unambiguous category members. Speech synthesis techniques were used to differentially manipulate talkers' characteristic productions of the stop voicing contrast for two groups of listeners. Following exposure to the talkers, internal category structure and category boundary were examined. The results showed that listeners dynamically adjusted internal category structure to be centered on experience with the talker's voice, but the category boundary remained fixed. These patterns were observed for words presented during training as well as novel lexical items. These findings point to input-driven constraints on functional plasticity within the language architecture, which may help to explain how listeners maintain stability of linguistic knowledge while simultaneously demonstrating flexibility for phonetic representations.
Subject(s)
Phonetics, Speech Intelligibility, Speech Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Classification, Psychological Feedback, Female, Humans, Learning, Male, Memory, Time Factors, Voice, Young Adult
ABSTRACT
PURPOSE: Numerous tasks have been developed to measure receptive vocabulary, many of which were designed to be administered in person with a trained researcher or clinician. The purpose of the current study is to compare a common, in-person test of vocabulary with other vocabulary assessments that can be self-administered. METHOD: Fifty-three participants completed the Peabody Picture Vocabulary Test (PPVT) via online video call to mimic in-person administration, as well as four additional fully automated, self-administered measures of receptive vocabulary. Participants also completed three control tasks that do not measure receptive vocabulary. RESULTS: Pearson correlations indicated moderate correlations among most of the receptive vocabulary measures (approximately r = .50-.70). As expected, the control tasks showed only weak correlations with the vocabulary measures. However, subsets of items from the four self-administered measures of receptive vocabulary achieved high correlations with the PPVT (r > .80). These subsets were found through a repeated resampling approach. CONCLUSIONS: Measures of receptive vocabulary differ in which items are included and in the assessment task (e.g., lexical decision, picture matching, synonym matching). The results of the current study suggest that several self-administered tasks are able to achieve high correlations with the PPVT when a subset of items is scored, rather than the full set of items. These data provide evidence that subsets of items on one behavioral assessment can correlate more highly with another measure. In practical terms, these data demonstrate that self-administered, automated measures of receptive vocabulary can be used as reasonable substitutes for at least one test (PPVT) that requires human interaction. That several of the fully automated measures resulted in high correlations with the PPVT suggests that different tasks could be selected depending on the needs of the researcher. It is important to note that the aim was not to establish the clinical relevance of these measures, but rather to establish whether researchers could use an experimental task of receptive vocabulary that probes a construct similar to that captured by the PPVT, and to use such tasks as measures of individual differences.
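The repeated resampling approach described here can be sketched as follows. This is a minimal illustration, not the authors' analysis code; the data shapes (per-participant lists of 0/1 item accuracies and a list of reference PPVT-style scores) and all function names are assumptions for the sketch.

```python
import random
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def best_item_subset(item_scores, reference, subset_size,
                     n_samples=2000, seed=1):
    """Repeatedly sample random item subsets and keep the subset whose
    summed scores correlate most highly with the reference measure.

    item_scores: per-participant lists of 0/1 item accuracies
    reference:   per-participant reference test scores (e.g., PPVT)
    """
    rng = random.Random(seed)
    n_items = len(item_scores[0])
    best_r, best_subset = -1.0, None
    for _ in range(n_samples):
        subset = rng.sample(range(n_items), subset_size)
        totals = [sum(p[i] for i in subset) for p in item_scores]
        r = pearson(totals, reference)
        if r > best_r:
            best_r, best_subset = r, subset
    return best_subset, best_r
```

Note that selecting the best-correlating subset on the same sample it is evaluated on risks overfitting, so in practice such subsets should be validated on held-out participants.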
Subject(s)
Vocabulary, Humans, Intelligence Tests
ABSTRACT
Research suggests that individuals differ in the degree to which they rely on lexical information to support speech perception. However, the locus of these differences is not yet known; nor is it known whether these individual differences reflect a context-dependent "state" or a stable listener "trait." Here we test the hypothesis that individual differences in lexical reliance are a stable trait that is linked to individuals' relative weighting of lexical and acoustic-phonetic information for speech perception. At each of two sessions, listeners (n = 73) completed a Ganong task, a phonemic restoration task, and a locally time-reversed speech task - three tasks that have been used to demonstrate a lexical influence on speech perception. Robust lexical effects on speech perception were observed for each task in the aggregate. Individual differences in lexical reliance were stable across sessions; however, relationships among the three tasks in each session were weak. For the Ganong and locally time-reversed speech tasks, increased reliance on lexical information was associated with weaker reliance on acoustic-phonetic information. Collectively, these results (1) provide some evidence to suggest that individual differences in lexical reliance for a given task are a stable reflection of the relative weighting of acoustic-phonetic and lexical cues for speech perception in that task, and (2) highlight the need for a better understanding of the psychometric characteristics of tasks used in the psycholinguistic domain to build theories that can accommodate individual differences in mapping speech to meaning.
Subject(s)
Individuality, Speech Perception, Humans, Cues (Psychology), Psycholinguistics, Phonetics
ABSTRACT
There is wide variability in the acoustic patterns that are produced for a given linguistic message, including variability that is conditioned on who is speaking. Listeners solve this lack of invariance problem, at least in part, by dynamically modifying the mapping to speech sounds in response to structured variation in the input. Here we test a primary tenet of the ideal adapter framework of speech adaptation, which posits that perceptual learning reflects the incremental updating of cue-sound mappings to incorporate observed evidence with prior beliefs. Our investigation draws on the influential lexically guided perceptual learning paradigm. During an exposure phase, listeners heard a talker who produced fricative energy ambiguous between /ʃ/ and /s/. Lexical context differentially biased interpretation of the ambiguity as either /s/ or /ʃ/, and, across two behavioral experiments (n = 500), we manipulated the quantity of evidence and the consistency of evidence that was provided during exposure. Following exposure, listeners categorized tokens from an ashi-asi continuum to assess learning. The ideal adapter framework was formalized through computational simulations, which predicted that learning would be graded to reflect the quantity, but not the consistency, of the exposure input. These predictions were upheld in human listeners; the magnitude of the learning effect monotonically increased given exposure to four, 10, or 20 critical productions, and there was no evidence that learning differed given consistent versus inconsistent exposure. These results (1) provide support for a primary tenet of the ideal adapter framework, (2) establish quantity of evidence as a key determinant of adaptation in human listeners, and (3) provide critical evidence that lexically guided perceptual learning is not a binary outcome. In doing so, the current work provides foundational knowledge to support theoretical advances that consider perceptual learning as a graded outcome that is tightly linked to input statistics in the speech stream.
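The incremental-updating idea can be illustrated with a toy conjugate normal-normal update (this is not the authors' simulation; the prior and noise variances below are arbitrary assumptions chosen for the sketch). The posterior mean shifts monotonically, but sublinearly, toward the exposure evidence as the number of observed tokens grows, mirroring the graded learning effect for four, 10, or 20 critical productions.

```python
def posterior_boundary_shift(n_tokens, prior_var=1.0, noise_var=4.0,
                             evidence=1.0):
    """Conjugate normal-normal update of a belief about a talker's
    category boundary. The prior mean is 0 with variance prior_var;
    each exposure token is an independent observation at `evidence`
    with variance noise_var. Returns the posterior mean, i.e., the
    learned shift toward the talker's productions."""
    posterior_precision = 1.0 / prior_var + n_tokens / noise_var
    return (n_tokens / noise_var * evidence) / posterior_precision

# Learning grows with quantity of evidence but saturates toward `evidence`
shifts = [posterior_boundary_shift(n) for n in (4, 10, 20)]
```

Because the update depends only on the sufficient statistics of the exposure input, adding more tokens always sharpens the posterior, which is one way to formalize learning as a graded rather than binary outcome.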
Subject(s)
Speech Perception, Speech, Humans, Speech Perception/physiology, Learning/physiology, Hearing, Phonetics
ABSTRACT
PURPOSE: Sleep-based memory consolidation has been shown to facilitate perceptual learning of atypical speech input including nonnative speech sounds, accented speech, and synthetic speech. The current research examined the role of sleep-based memory consolidation on perceptual learning for noise-vocoded speech, including maintenance of learning over a 1-week time interval. Because comprehending noise-vocoded speech requires extensive restructuring of the mapping between the acoustic signal and prelexical representations, sleep consolidation may be critical for this type of adaptation. Thus, the purpose of this study was to investigate the role of sleep-based memory consolidation on adaptation to noise-vocoded speech in listeners without hearing loss as a foundational step toward identifying parameters that can be useful to consider for auditory training with clinical populations. METHOD: Two groups of normal-hearing listeners completed a transcription training task with feedback for noise-vocoded sentences in either the morning or the evening. Learning was assessed through transcription accuracy before training, immediately after training, 12 hr after training, and 1 week after training for both trained and novel sentences. RESULTS: Both the morning and evening groups showed improved comprehension of noise-vocoded sentences immediately following training. Twelve hours later, the evening group showed stable gains (following a period of sleep), whereas the morning group demonstrated a decline in gains (following a period of wakefulness). One week after training, the morning and evening groups showed equivalent performance for both trained and novel sentences. CONCLUSION: Sleep-consolidated learning helps stabilize training gains for degraded speech input, which may hold clinical utility for optimizing rehabilitation recommendations.
Subject(s)
Memory Consolidation, Speech Perception, Humans, Speech, Learning, Sleep, Acoustic Stimulation, Perceptual Masking
ABSTRACT
Listeners show perceptual benefits (faster and/or more accurate responses) when perceiving speech spoken by a single talker versus multiple talkers, a phenomenon known as talker adaptation. While near-exclusively studied in speech and with talkers, some aspects of talker adaptation might reflect domain-general processes. Music, like speech, is a sound class replete with acoustic variation, such as a multitude of pitch and instrument possibilities. Thus, it was hypothesized that perceptual benefits from structure in the acoustic signal (i.e., hearing the same sound source on every trial) are not specific to speech but rather reflect a general auditory response. Forty nonmusician participants completed a simple musical task that mirrored talker adaptation paradigms. Low- or high-pitched notes were presented in single- and mixed-instrument blocks. Consistent with music research on the interdependence of pitch and timbre, and mirroring traditional "talker" adaptation paradigms, listeners were faster to make their pitch judgments when presented with a single instrument timbre than when the timbre was drawn from one of four instruments from trial to trial. A second experiment ruled out the possibility that participants were responding faster to the specific instrument chosen as the single-instrument timbre. Consistent with general theoretical approaches to perception, perceptual benefits from signal structure are not limited to speech.
Subject(s)
Music, Speech Perception, Humans, Pitch Perception/physiology, Hearing, Hearing Tests, Speech Perception/physiology
ABSTRACT
The goal of the current work was to develop and validate web-based measures for assessing English vocabulary knowledge. Two existing paper-and-pencil assessments, the Vocabulary Size Test (VST) and the Word Familiarity Test (WordFAM), were modified for web-based administration. In Experiment 1, participants (n = 100) completed the web-based VST. In Experiment 2, participants (n = 100) completed the web-based WordFAM. Results from these experiments confirmed that both tasks (1) could be completed online, (2) showed expected sensitivity to English frequency patterns, (3) exhibited high internal consistency, and (4) showed an expected range of item discrimination scores, with low frequency items exhibiting higher item discrimination scores compared to high frequency items. This work provides open-source English vocabulary knowledge assessments with normative data that researchers can use to foster high quality data collection in web-based environments.
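Item discrimination of the kind reported here is commonly computed as a corrected item-total correlation: each item's scores are correlated with participants' totals over the remaining items. The sketch below illustrates that standard computation; the function names and the per-participant 0/1 response format are our assumptions, not the authors' pipeline.

```python
from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def item_discrimination(responses):
    """Corrected item-total correlation for each item.

    responses: per-participant lists of 0/1 item scores. An item
    discriminates well when participants who answer it correctly
    also score highly on the rest of the test.
    """
    n_items = len(responses[0])
    return [
        pearson([p[i] for p in responses],          # this item's scores
                [sum(p) - p[i] for p in responses])  # totals excluding it
        for i in range(n_items)
    ]
```

Correcting the total by excluding the item itself avoids inflating the correlation, since an item trivially correlates with any total that contains it.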
ABSTRACT
Two measures for assessing English vocabulary knowledge, the Vocabulary Size Test (VST) and the Word Familiarity Test (WordFAM), were recently validated for web-based administration. An analysis of the psychometric properties of these assessments revealed high internal consistency, suggesting that stable assessment could be achieved with fewer test items. Because researchers may use these assessments in conjunction with other experimental tasks, the utility may be enhanced if they are shorter in duration. To this end, two "brief" versions of the VST and the WordFAM were developed and submitted to validation testing. Each version consisted of approximately half of the items from the full assessment, with novel items across each brief version. Participants (n = 85) completed one brief version of both the VST and the WordFAM at session one, followed by the other brief version of each assessment at session two. The results showed high test-retest reliability for both the VST (r = 0.68) and the WordFAM (r = 0.82). The assessments also showed moderate convergent validity (ranging from r = 0.38 to 0.59), indicative of assessment validity. This work provides open-source English vocabulary knowledge assessments with normative data that researchers can use to foster high quality data collection in web-based environments.
ABSTRACT
To identify a spoken word (e.g., dog), people must categorize the speech stream into distinct units (e.g., contrast dog/fog) and extract their combinatorial structure (e.g., distinguish dog/god). However, the mechanisms that support these two core functions are not fully understood. Here, we explore this question using transcranial magnetic stimulation (TMS). We show that speech categorization engages the motor system, as stimulating the lip motor area has opposite effects on labial (ba/pa) and coronal (da/ta) sounds. In contrast, the combinatorial computation of syllable structure engages Broca's area, as its stimulation disrupts sensitivity to syllable structure (compared to motor stimulation). We conclude that the two ingredients of language-categorization and combination-are distinct functions in the human brain.
Subject(s)
Motor Cortex, Phonetics, Speech Perception, Humans, Language, Motor Cortex/physiology, Speech/physiology, Speech Perception/physiology, Transcranial Magnetic Stimulation
ABSTRACT
Theories suggest that speech perception is informed by listeners' beliefs of what phonetic variation is typical of a talker. A previous fMRI study found right middle temporal gyrus (RMTG) sensitivity to whether a phonetic variant was typical of a talker, consistent with literature suggesting that the right hemisphere may play a key role in conditioning phonetic identity on talker information. The current work used transcranial magnetic stimulation (TMS) to test whether the RMTG plays a causal role in processing talker-specific phonetic variation. Listeners were exposed to talkers who differed in how they produced voiceless stop consonants while TMS was applied to RMTG, left MTG, or scalp vertex. Listeners subsequently showed near-ceiling performance in indicating which of two variants was typical of a trained talker, regardless of previous stimulation site. Thus, even though the RMTG is recruited for talker-specific phonetic processing, modulation of its function may have only modest consequences.