Results 1 - 20 of 193
1.
Front Psychol ; 15: 1357975, 2024.
Article in English | MEDLINE | ID: mdl-39135868

ABSTRACT

Introduction: This study aimed to explore the arousal and valence that people experience in response to Hangul phonemes based on the gender of an AI speaker, comparing Korean and Chinese cultures. Methods: To achieve this, 42 Hangul phonemes were used, combining three Korean vowels and 14 Korean consonants, to explore cultural differences in arousal, valence, and the six foundational emotions based on the gender of an AI speaker. A total of 136 Korean and Chinese women were recruited and randomly assigned to one of two conditions based on voice gender (man or woman). Results and discussion: This study revealed significant differences in arousal levels between Korean and Chinese women when exposed to male voices. Specifically, Chinese women exhibited clear differences in emotional perceptions of male and female voices in response to voiced consonants. These results confirm that arousal and valence may differ with articulation types and vowels due to cultural differences and that voice gender can affect perceived emotions. This principle can serve as evidence for sound symbolism and has practical implications for voice gender and branding in AI applications.

2.
Article in English | MEDLINE | ID: mdl-38908790

ABSTRACT

INTRODUCTION: Human beings are constantly exposed to complex acoustic environments every day, which pose challenges even for individuals with normal hearing. Speech perception relies not only on fixed elements within the acoustic wave but is also influenced by various factors. These factors include speech intensity, environmental noise, the presence of other speakers, individual characteristics, spatial separation of sound sources, ambient reverberation, and audiovisual cues. The objective of this study is twofold: to determine the auditory capacity of normal-hearing individuals to discriminate spoken words in real-life acoustic conditions and to perform a phonetic analysis of misunderstood spoken words. MATERIALS AND METHODS: This is a descriptive observational cross-sectional study involving 20 normal-hearing individuals. Verbal audiometry was conducted in an open-field environment, with sounds masked by simulated real-world acoustic environments at various sound intensity levels. To complement sound presentation, 2D visual images related to the sounds were displayed on a television. We analyzed the percentage of correct answers and performed a phonetic analysis of misunderstood Spanish bisyllabic words in each environment. RESULTS: The sample comprised 14 women (70%) and 6 men (30%), with an average age of 26 ± 5.4 years, a mean air-conduction hearing threshold of 10.56 ± 3.52 dB SPL in the right ear, and 10.12 ± 2.49 dB SPL in the left ear. The percentage of verbal discrimination in the "Ocean" sound environment was 97.2 ± 5.04%, in "Restaurant" 94 ± 4.58%, and in "Traffic" 86.2 ± 9.94% (p = 0.000). Regarding the phonetic analysis, the allophones that exhibited statistically significant differences were as follows: [o] (p = 0.002) among the vocalic phonemes, [n] (p = 0.000) among voiced nasal consonants, [r] (p = 0.0016) among voiced fricatives, and [b] (p = 0.000) and [g] (p = 0.045) among voiced stops.
CONCLUSION: The dynamic properties of the acoustic environment can impact the ability of a normal-hearing individual to extract information from a voice signal. Our study demonstrates that this ability decreases when the voice signal is masked by one or more simultaneous interfering voices, as observed in the "Restaurant" environment, and when it is masked by a continuous and intense noise environment such as "Traffic". Regarding the phonetic analysis, when the sound environment was composed of continuous low-frequency noise, we found that nasal consonants were particularly challenging to identify. Furthermore, in situations with distracting verbal signals, vowels and trill consonants exhibited the worst intelligibility.

3.
Behav Res Methods ; 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38594441

ABSTRACT

This work introduces the English Sublexical Toolkit, a suite of tools that utilizes an experience-dependent learning framework of sublexical knowledge to extract regularities from the English lexicon. The Toolkit quantifies the empirical regularity of sublexical units in both the reading and spelling directions (i.e., grapheme-to-phoneme and phoneme-to-grapheme) and at multiple grain sizes (i.e., phoneme/grapheme and onset/rime unit size). It can extract multiple experience-dependent regularity indices for words or pseudowords, including both frequency indices (e.g., grapheme frequency) and conditional probability indices (e.g., grapheme-to-phoneme probability). These tools provide (1) superior estimates of the regularities that better reflect the complexity of the sublexical system relative to previously published indices and (2) completely novel indices of sublexical units such as phonographeme frequency (i.e., combined units of individual phonemes and graphemes that are independent of processing direction). We demonstrate that measures from the toolkit explain significant amounts of variance in empirical data (naming of real words and lexical decision) and either outperform or are comparable to the best available consistency measures. The flexibility of the toolkit is further demonstrated by its ability to readily index the probability of different pronunciations of pseudowords, and we report that the measures account for the majority of variance in these empirically observed probabilities. Overall, this work provides a framework and resources that can be flexibly used to identify optimal corpus-based consistency measures that help explain reading/spelling behaviors for real words and pseudowords.
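The two kinds of index described above (a frequency index and a conditional-probability index) can be sketched over toy data. The alignments, counts, and the `g2p_probability` helper below are invented for illustration and are not the Toolkit's actual API:

```python
from collections import Counter

# Toy grapheme-phoneme alignments standing in for the Toolkit's input; the
# real indices are computed over the full English lexicon, and the counts
# below are hypothetical.
alignments = [
    ("ea", "i:"), ("ea", "i:"), ("ea", "E"),   # e.g. "heat", "bead" vs. "head"
    ("sh", "S"), ("sh", "S"),                  # e.g. "ship", "shop"
]

grapheme_freq = Counter(g for g, _ in alignments)  # a frequency index
pair_freq = Counter(alignments)

def g2p_probability(grapheme, phoneme):
    """Conditional probability P(phoneme | grapheme): the reading direction.

    The spelling direction (phoneme-to-grapheme) would condition the other way.
    """
    return pair_freq[(grapheme, phoneme)] / grapheme_freq[grapheme]

print(grapheme_freq["ea"])          # 3
print(g2p_probability("ea", "i:"))  # 2/3, i.e. about 0.667
```

The same counting scheme extends to onset/rime grain sizes by replacing single graphemes with larger units as the conditioning context.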

4.
Audiol Res ; 14(2): 264-279, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38525685

ABSTRACT

BACKGROUND: The Chear open-set performance test (COPT), which uses a carrier phrase followed by a monosyllabic test word, is intended for clinical assessment of speech recognition, evaluation of hearing-device performance, and the fine-tuning of hearing devices for speakers of British English. This paper assesses practice effects, test-retest reliability, and the variability across lists of the COPT. METHOD: In experiment 1, 16 normal-hearing participants were tested using an initial version of the COPT, at three speech-to-noise ratios (SNRs). Experiment 2 used revised COPT lists, with items swapped between lists to reduce differences in difficulty across lists. In experiment 3, test-retest repeatability was assessed for stimuli presented in quiet, using 15 participants with sensorineural hearing loss. RESULTS: After administration of a single practice list, no practice effects were evident. The critical difference between scores for two lists was about 2 words (out of 15) or 5 phonemes (out of 50). The mean estimated SNR required for 74% words correct was -0.56 dB, with a standard deviation across lists of 0.16 dB. For the participants with hearing loss tested in quiet, the critical difference between scores for two lists was about 3 words (out of 15) or 6 phonemes (out of 50).

5.
JMIRx Med ; 5: e49969, 2024 02 09.
Article in English | MEDLINE | ID: mdl-38345294

ABSTRACT

Background: High-frequency hearing loss is one of the most common problems in the aging population and with those who have a history of exposure to loud noises. This type of hearing loss can be frustrating and disabling, making it difficult to understand speech communication and interact effectively with the world. Objective: This study aimed to examine the impact of spatially unique haptic vibrations representing high-frequency phonemes on the self-perceived ability to understand conversations in everyday situations. Methods: To address high-frequency hearing loss, a multi-motor wristband was developed that uses machine learning to listen for specific high-frequency phonemes. The wristband vibrates in spatially unique locations to represent which phoneme was present in real time. A total of 16 participants with high-frequency hearing loss were recruited and asked to wear the wristband for 6 weeks. The degree of disability associated with hearing loss was measured weekly using the Abbreviated Profile of Hearing Aid Benefit (APHAB). Results: By the end of the 6-week study, the average APHAB score across all participants improved by 12.39 points, from a baseline of 40.32 to a final score of 27.93 (SD 13.11; N=16; P=.002, 2-tailed dependent t test). Those without hearing aids showed a 10.78-point larger improvement in average APHAB benefit score at 6 weeks than those with hearing aids (t14=2.14; P=.10, 2-tailed independent t test). The average benefit score across all participants for ease of communication was 15.44 (SD 13.88; N=16; P<.001, 2-tailed dependent t test). The average benefit score across all participants for background noise was 10.88 (SD 17.54; N=16; P=.03, 2-tailed dependent t test). The average benefit score across all participants for reverberation was 10.84 (SD 16.95; N=16; P=.02, 2-tailed dependent t test).
Conclusions: These findings show that vibrotactile sensory substitution delivered by a wristband that produces spatially distinguishable vibrations in correspondence with high-frequency phonemes helps individuals with high-frequency hearing loss improve their perceived understanding of verbal communication. Vibrotactile feedback provides benefits whether or not a person wears hearing aids, albeit in slightly different ways. Finally, individuals with the greatest perceived difficulty understanding speech experienced the greatest amount of perceived benefit from vibrotactile feedback.
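The group-level statistic used throughout this entry, a 2-tailed dependent (paired) t test, reduces to a short hand calculation. The baseline and week-6 scores below are invented, since the abstract reports only summary statistics:

```python
import math

# Hypothetical baseline and week-6 APHAB scores (lower = less reported
# difficulty); a generic paired t statistic, not the study's raw data.
baseline = [42.0, 38.5, 51.0, 36.0, 40.0, 45.5]
week6 = [30.0, 27.5, 39.0, 30.0, 24.0, 33.5]

diffs = [b - w for b, w in zip(baseline, week6)]  # per-participant benefit
n = len(diffs)
mean_benefit = sum(diffs) / n
sd = math.sqrt(sum((d - mean_benefit) ** 2 for d in diffs) / (n - 1))
t = mean_benefit / (sd / math.sqrt(n))  # compare to t(n-1) for the p value

print(mean_benefit)  # 11.5
```

The resulting t statistic would be compared against a t distribution with n-1 degrees of freedom to obtain the 2-tailed p value.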

6.
Behav Res Methods ; 56(4): 2751-2764, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38361097

ABSTRACT

Child-directed print corpora enable systematic psycholinguistic investigations, but this research infrastructure is not available in many understudied languages. Moreover, researchers of understudied languages are dependent on manual tagging because precise automatized parsers are not yet available. One plausible way forward is to limit the intensive work to a small-sized corpus. However, with little systematic enquiry into approaches to corpus construction, it is unclear how robust a small corpus can be made. The current study examines the potential of a non-sequential sampling protocol for small corpus development (NSP-SCD) through a cross-corpora and within-corpus analysis. A corpus comprising 17,584 words was developed by applying the protocol to a larger corpus of 150,595 words from children's books for 3-to-10-year-olds. While the larger corpus will by definition have more instances of unique words and unique orthographic units, the selectively sampled small corpus nevertheless approximated the larger corpus for lexical and orthographic diversity and was equivalent for orthographic representation and word length. Psycholinguistic complexity increased by book level and varied by parts of speech. Finally, in a robustness check of lexical diversity, the non-sequentially sampled small corpus was more efficient than a same-sized corpus constructed by simply using all sentences from a few books (402 books vs. seven books). If a small corpus must be used, then non-sequential sampling from books stratified by book level makes the corpus statistics better approximate those found in larger corpora. Overall, the protocol shows promise as a tool to advance the science of child language acquisition in understudied languages.


Subjects
Language , Psycholinguistics , Humans , Psycholinguistics/methods , Child , Preschool Child , Reading , Vocabulary , Male , Female , Language Development
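The robustness check in this entry, comparing stratified (non-sequential) against sequential sampling on lexical diversity, can be sketched with a synthetic corpus. Everything below (the vocabularies, token counts, and sample sizes) is invented; only the comparison logic mirrors the study's design:

```python
import random

def lexical_diversity(tokens):
    """Type-token ratio: unique word types / total tokens."""
    return len(set(tokens)) / len(tokens)

# A hypothetical larger corpus keyed by book level, standing in for the
# 150,595-word corpus; vocabularies grow and shift with level.
random.seed(0)
vocab = {1: [f"w{i}" for i in range(50)],
         2: [f"w{i}" for i in range(40, 120)],
         3: [f"w{i}" for i in range(100, 250)]}
corpus = {lvl: [random.choice(v) for _ in range(2000)] for lvl, v in vocab.items()}

# Non-sequential sampling: draw material from every book level (stratified),
# rather than taking all text from a few books at one level.
stratified = [t for lvl in corpus for t in random.sample(corpus[lvl], 200)]
sequential = corpus[1][:600]  # same size, but from a single level only

print(lexical_diversity(stratified) > lexical_diversity(sequential))  # True
```

Because the stratified sample draws on the vocabulary of every level, its type-token ratio approximates the larger corpus far better than a same-sized sequential slice.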
7.
Phonetica ; 81(2): 221-264, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38095565

ABSTRACT

The present article describes a modified and extended replication of a corpus study by Brewer (2008. Phonetic reflexes of orthographic characteristics in lexical representation. Tucson, AZ: University of Arizona PhD thesis), which reports differences in the acoustic duration of homophonous but heterographic sounds. The original findings point to a quantity effect of spelling on acoustic duration, i.e., the more letters are used to spell a sound, the longer the sound's duration. Such a finding would have extensive theoretical implications and necessitate more research on how exactly spelling would come to influence speech production. However, the effects found by Brewer (2008) did not consistently reach statistical significance, and the analysis did not include many of the covariates that are now known to influence segment duration, rendering the robustness of the results at least questionable. Employing a more nuanced operationalization of graphemic units and a more advanced statistical analysis, the current replication fails to find the reported effect of letter quantity. Instead, we find an effect of graphemic complexity. Speakers realize consonants that do not have a visible graphemic correlate with shorter durations: the /s/ in tux is shorter than the /s/ in fuss. The effect presumably resembles orthographic visibility effects found in perception. In addition, our results highlight the need for a more rigorous approach to replicability in linguistics.


Subjects
Language , Phonetics , Humans , Speech , Acoustics , Research Design
8.
Br J Educ Psychol ; 94(1): 282-305, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37984992

ABSTRACT

BACKGROUND: Despite evidence that synthetic phonics teaching has increased reading attainments, a sizable minority of children struggle to acquire phonics skills and teachers lack clear principles for deciding what types of additional support are most beneficial. Synthetic phonics teaches children to read using a decoding strategy to translate letters into sounds and blend them (e.g., c-a-t = "k - ae - t" = "cat"). To use a decoding strategy, children require letter-sound knowledge (LSK) and the ability to blend sound units (phonological awareness; PA). Training on PA has been shown to benefit struggling beginning readers. However, teachers in English primary schools do not routinely check PA. Instead, struggling beginner readers usually receive additional LSK support. AIMS: Until now, there has been no systematic comparison of the effectiveness of training on each component of the decoding process. Should additional support for struggling readers focus on improving PA, or on supplementary LSK and/or decoding instruction? We aim to increase understanding of the roles of LSK and PA in children's acquisition of phonics skills and uncover which types of additional training are most likely to be effective for struggling beginner readers. SAMPLE AND METHOD: We will compare training on each of these components, using a carefully controlled experimental design. We will identify reception-age children at risk of reading difficulties (target n = 225) and randomly allocate them to either PA, LSK or decoding (DEC) training. We will test whether training type influences post-test performance on word reading and whether any effects depend on participants' pre-test PA and/or LSK. RESULTS AND CONCLUSIONS: Two hundred and twenty-two participants completed the training. Planned analyses showed no effects of condition on word reading. However, exploratory analyses indicated that the advantage of trained over untrained words was significantly greater for the PA and DEC conditions. 
There was also a significantly greater improvement in PA for the DEC condition. Overall, our findings suggest a potential advantage of training that includes blending skills, particularly when decoding words that had been included in training. Future research is needed to develop a programme of training on blending skills combined with direct vocabulary instruction for struggling beginner readers.


Subjects
Education Personnel , Phonetics , Child , Humans , Cognition , Reading , Vocabulary
9.
Sensors (Basel) ; 23(24)2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38139496

ABSTRACT

Problem: Phonetic transcription is crucial in diagnosing speech sound disorders (SSDs) but is susceptible to transcriber experience and perceptual bias. Current forced alignment (FA) tools, which annotate audio files to determine spoken content and its placement, often require manual transcription, limiting their effectiveness. Method: We introduce a novel, text-independent forced alignment model that autonomously recognises individual phonemes and their boundaries, addressing these limitations. Our approach leverages an advanced, pre-trained wav2vec 2.0 model to segment speech into tokens and recognise them automatically. To accurately identify phoneme boundaries, we utilise an unsupervised segmentation tool, UnsupSeg. Labelling of segments employs nearest-neighbour classification with wav2vec 2.0 labels, before connectionist temporal classification (CTC) collapse, determining class labels based on maximum overlap. Additional post-processing, including overfitting cleaning and voice activity detection, is implemented to enhance segmentation. Results: We benchmarked our model against existing methods using the TIMIT dataset for normal speakers and, for the first time, evaluated its performance on the TORGO dataset containing SSD speakers. Our model demonstrated competitive performance, achieving a harmonic mean score of 76.88% on TIMIT and 70.31% on TORGO. Implications: This research presents a significant advancement in the assessment and diagnosis of SSDs, offering a more objective and less biased approach than traditional methods. Our model's effectiveness, particularly with SSD speakers, opens new avenues for research and clinical application in speech pathology.


Subjects
Speech Perception , Voice , Humans , Phonetics , Speech , Pathologists
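The labelling step in this entry, assigning each unsupervised segment the class with maximum overlap, can be sketched independently of the wav2vec 2.0 machinery. The segment boundaries and token spans below are hypothetical; only the maximum-overlap rule matches the described method:

```python
def overlap(a, b):
    """Length of overlap between two (start, end) intervals, in seconds."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def label_segments(segments, token_spans):
    """Give each unsupervised segment the label of the token span that
    overlaps it most, in the spirit of the maximum-overlap step described
    above.  All boundary values here are invented for illustration.
    """
    labelled = []
    for seg in segments:
        best = max(token_spans, key=lambda ts: overlap(seg, (ts[0], ts[1])))
        labelled.append((seg, best[2]))
    return labelled

segments = [(0.00, 0.08), (0.08, 0.20)]                # e.g. from UnsupSeg
token_spans = [(0.00, 0.06, "k"), (0.06, 0.20, "ae")]  # e.g. wav2vec 2.0 labels
print(label_segments(segments, token_spans))
# [((0.0, 0.08), 'k'), ((0.08, 0.2), 'ae')]
```

In the full pipeline, these labelled segments would then pass through the post-processing stages (overfitting cleaning and voice activity detection) before evaluation.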
10.
Front Psychol ; 14: 1232262, 2023.
Article in English | MEDLINE | ID: mdl-38023001

ABSTRACT

Introduction: The perception of phonemes is guided by both low-level acoustic cues and high-level linguistic context. However, differentiating between these two types of processing can be challenging. In this study, we explore the utility of pupillometry as a tool to investigate both low- and high-level processing of phonological stimuli, with a particular focus on its ability to capture novelty detection and cognitive processing during speech perception. Methods: Pupillometric traces were recorded from a sample of 22 Danish-speaking adults, with self-reported normal hearing, while performing two phonological-contrast perception tasks: a nonword discrimination task, which included minimal-pair combinations specific to the Danish language, and a nonword detection task involving the detection of phonologically modified words within sentences. The study explored the perception of contrasts in both unprocessed speech and degraded speech input, processed with a vocoder. Results: No difference in peak pupil dilation was observed when the contrast occurred between two isolated nonwords in the nonword discrimination task. For unprocessed speech, higher peak pupil dilations were measured when phonologically modified words were detected within a sentence compared to sentences without the nonwords. For vocoded speech, higher peak pupil dilation was observed for sentence stimuli, but not for the isolated nonwords, although performance decreased similarly for both tasks. Conclusion: Our findings demonstrate the complexity of pupil dynamics in the presence of acoustic and phonological manipulation. Pupil responses seemed to reflect higher-level cognitive and lexical processing related to phonological perception rather than low-level perception of acoustic cues. However, the incorporation of multiple talkers in the stimuli, coupled with the relatively low task complexity, may have affected the pupil dilation.
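Peak pupil dilation, the outcome measure in this entry, is typically computed by baseline-correcting each trace against a pre-stimulus window and then taking the maximum in the response window. A minimal sketch with invented numbers (the study's preprocessing details are not given in the abstract):

```python
def peak_pupil_dilation(trace, baseline_window, response_window):
    """Baseline-corrected peak dilation from a single pupil-size trace.

    A common pupillometry reduction: subtract the mean pupil size in a
    pre-stimulus baseline window, then take the maximum within the response
    window.  The trace values and window indices below are hypothetical.
    """
    b0, b1 = baseline_window
    r0, r1 = response_window
    baseline = sum(trace[b0:b1]) / (b1 - b0)
    return max(trace[r0:r1]) - baseline

trace = [4.0, 4.1, 3.9, 4.0, 4.3, 4.8, 5.1, 4.9, 4.6]  # arbitrary units
print(peak_pupil_dilation(trace, (0, 4), (4, 9)))  # about 1.1
```

Comparing this quantity across conditions (e.g., sentences with vs. without phonologically modified words) yields the peak-dilation contrasts the study reports.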

11.
Clin Neurophysiol ; 156: 228-241, 2023 12.
Article in English | MEDLINE | ID: mdl-37988851

ABSTRACT

OBJECTIVE: We explored neural components in Electroencephalography (EEG) signals during a phonological processing task to assess (a) the neural origins of Baddeley's working-memory components contributing to phonological processing, (b) the unitary structure of phonological processing and (c) the neural differences between children with dyslexia (DYS) and controls (CAC). METHODS: EEG data were collected from sixty children (half with dyslexia) while performing the initial- and final-phoneme elision task. We explored a novel machine-learning-based approach to identify the neural components in EEG elicited in response to the two conditions and capture differences between DYS and CAC. RESULTS: Our method identifies two sets of phoneme-related neural congruency components capturing neural activations distinguishing DYS and CAC across conditions. CONCLUSIONS: Neural congruency components capture the underlying neural mechanisms that drive the relationship between phonological deficits and dyslexia and provide insights into the phonological loop and visual-sketchpad dimensions in Baddeley's model at the neural level. They also confirm the unitary structure of phonological awareness with EEG data. SIGNIFICANCE: Our findings provide novel insights into the neural origins of the phonological processing differences in children with dyslexia, the unitary structure of phonological awareness, and further verify Baddeley's model as a theoretical framework for phonological processing and dyslexia.


Assuntos
Dislexia , Fonética , Criança , Humanos , Dislexia/diagnóstico , Memória de Curto Prazo , Leitura
12.
J Autism Dev Disord ; 2023 Oct 03.
Article in English | MEDLINE | ID: mdl-37787847

ABSTRACT

Current theories of Autism Spectrum Disorder (ASD) suggest atypical use of context in ASD, but little is known about how these atypicalities influence speech perception. We examined the influence of contextual information (lexical, spectral, and temporal) on phoneme categorization of people with ASD and in typically developed (TD) people. Across three experiments, we found that people with ASD used all types of contextual information for disambiguating speech sounds to the same extent as TD; yet they exhibited a shallower identification curve when phoneme categorization required temporal processing. Overall, the results suggest that the observed atypicalities in speech perception in ASD, including the reduced sensitivity observed here, cannot be attributed merely to the limited ability to utilize context during speech perception.
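The "shallower identification curve" finding above refers to the slope of a psychometric function for phoneme categorization. A common model is a logistic function of the stimulus step; the continuum, boundary, and slope values below are hypothetical and only illustrate what a shallower curve looks like:

```python
import math

def identification_curve(x, boundary, slope):
    """Logistic psychometric function: P("category A" response) at step x.

    'boundary' is the 50% category boundary and 'slope' controls how sharply
    responses switch between phoneme categories; both values are invented.
    """
    return 1.0 / (1.0 + math.exp(-slope * (x - boundary)))

steps = range(1, 8)  # e.g. a hypothetical 7-step phoneme continuum
steep = [identification_curve(x, 4.0, 2.0) for x in steps]    # sharp boundary
shallow = [identification_curve(x, 4.0, 0.7) for x in steps]  # reduced sensitivity

# A shallower curve is less extreme at the continuum endpoints:
print(steep[-1] > shallow[-1] > 0.5)  # True
```

Fitting such a function to each participant's responses and comparing the slope parameter between groups is one standard way to quantify the categorization difference reported here.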

13.
Neuroimage ; 284: 120428, 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-37890563

ABSTRACT

During the last trimester of gestation, fetuses and preterm neonates begin to respond to sensory stimulation and to discover the structure of their environment. Yet, neuronal migration is still ongoing. This late migration notably concerns the supra-granular layer neurons, which are believed to play a critical role in encoding predictions and detecting regularities. In order to gain a deeper understanding of how the brain processes and perceives regularities during this stage of development, we conducted a study in which we recorded event-related potentials (ERP) in 31-wGA preterm and full-term neonates exposed to alternating auditory sequences (e.g., "ba ga ba ga ba"), when the regularity of these sequences was violated by a repetition (e.g., "ba ga ba ga ga"). We compared the ERPs in this case to those obtained when violating a simple repetition pattern ("ga ga ga ga ga" vs. "ga ga ga ga ba"). Our results indicated that both preterm and full-term neonates were able to detect violations of regularity in both types of sequences, indicating that as early as 31 weeks gestational age, human neonates are sensitive to the conditional statistics between successive auditory elements. Full-term neonates showed an early and similar mismatch response (MMR) in the repetition and alternating sequences. In contrast, 31-wGA neonates exhibited a two-component MMR. The first component, which was only observed for simple sequences with repetition, corresponded to sensory adaptation. It was followed much later by a deviance-detection component that was observed for both alternation and repetition sequences. This pattern confirms that MMRs detected at the scalp may correspond to a dual cortical process and shows that deviance detection computed by higher-level regions accelerates dramatically with brain maturation during the last weeks of gestation to become indistinguishable from bottom-up sensory adaptation at term.


Subjects
Brain , Electroencephalography , Newborn Infant , Female , Humans , Acoustic Stimulation , Brain/physiology , Evoked Potentials , Brain Mapping , Auditory Evoked Potentials/physiology
14.
Article in English | MEDLINE | ID: mdl-37701064

ABSTRACT

In this paper, we propose a method for removing linguistic information from speech for the purpose of isolating paralinguistic indicators of affect. The immediate utility of this method lies in clinical tests of sensitivity to vocal affect that are not confounded by language, which is impaired in a variety of clinical populations. The method is based on simultaneous recordings of speech audio and electroglottographic (EGG) signals. The speech audio signal is used to estimate the average vocal tract filter response and amplitude envelope. The EGG signal supplies a direct correlate of voice source activity that is mostly independent of phonetic articulation. These signals are used to create a third signal designed to capture as much paralinguistic information from the vocal production system as possible (maximizing the retention of bioacoustic cues to affect) while eliminating phonetic cues to verbal meaning. To evaluate the success of this method, we studied the perception of corresponding speech audio and transformed EGG signals in an affect rating experiment with online listeners. The results show a high degree of similarity in the perceived affect of matched signals, indicating that our method is effective.

15.
J Integr Neurosci ; 22(5): 112, 2023 Aug 09.
Article in English | MEDLINE | ID: mdl-37735128

ABSTRACT

BACKGROUND: The perception of basic emotional sounds, such as crying and laughter, is associated with effective interpersonal communication. Difficulties with the perception and analysis of sounds that complicate understanding emotions at an early developmental age may contribute to communication deficits. METHODS: This study focused on auditory nonverbal emotional perception, including emotional vocalizations with opposite valences (crying and laughter) and a neutral sound (the phoneme "Pᴂ"). We conducted event-related potential analysis and compared peak alpha frequencies (PAFs) for different conditions in children with autism spectrum disorder (ASD) and typically developing (TD) children aged 4 to 6 years old (N = 25 for each group). RESULTS: Children with ASD had a higher amplitude of P100 and lower amplitude of N200 for all types of sounds and a higher P270 in response to the neutral phoneme. During the perception of emotional sounds, children with ASD demonstrated a single P270 electroencephalography (EEG) component instead of the P200-P300 complex specific to TD children. However, the most significant differences were associated with responses to the emotional valences of stimuli. The EEG differences between crying and laughter were expressed as a lower amplitude of N400 and a higher PAF for crying compared to laughter and were found only in TD children. CONCLUSIONS: Children with ASD have shown not just abnormal acoustical perception but also altered emotional analysis of affective sounds.


Subjects
Autism Spectrum Disorder , Autistic Disorder , Child , Humans , Female , Male , Preschool Child , Electroencephalography , Evoked Potentials , Emotions , Auditory Perception
16.
Gen Dent ; 71(5): 30-33, 2023.
Article in English | MEDLINE | ID: mdl-37595080

ABSTRACT

This case report describes a patient with a primary concern of persistent mandibular deviation during speech who experienced clinically significant improvement (mandibular movement without deviation) after improvements to nasal resistance. At the initial consultation, temporary placement of a nasal valve dilator immediately eliminated the patient's mandibular deviation during speech, indicating the need for referral to an otolaryngologist. The patient was also provided with a dental appliance to address secondary concerns of temporomandibular joint noises and cervicofacial pain. Although the dental treatment provided some relief, resolution of the patient's mandibular deviation during speech did not occur until after nasal surgery was completed. This case illustrates the importance and effects of nasal resistance and nasal patency to obtaining a reproducible mandibular position.


Subjects
Prosthodontics , Temporomandibular Joint Disorders , Humans , Mandible
17.
Neuropsychologia ; 188: 108624, 2023 09 09.
Article in English | MEDLINE | ID: mdl-37328027

ABSTRACT

Poor phonological awareness is associated with greater risk for reading disability. The underlying neural mechanism of this association may lie in the brain's processing of phonological information. Lower amplitude of auditory mismatch negativity (MMN) has been associated with poor phonological awareness and with the presence of reading disability. The current study recorded auditory MMN to phoneme and lexical tone contrasts with an oddball paradigm and examined whether auditory MMN mediated the association between phonological awareness and character reading ability, through a three-year longitudinal study in 78 native Mandarin-speaking kindergarten children. Hierarchical linear regression and mediation analyses showed that the effect of phoneme awareness on character reading ability was mediated by the phonemic MMN in young Chinese children. Findings underscore the key role of phonemic MMN as the underlying neurodevelopmental mechanism linking phoneme awareness and reading ability.


Subjects
Dyslexia , Phonetics , Reading , Child , Humans , Brain , East Asian People , Longitudinal Studies
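The mediation analysis in this entry can be sketched with the standard product-of-coefficients approach: the indirect effect is a*b, where a is the slope of the mediator (phonemic MMN) on the predictor (phoneme awareness) and b is the mediator's slope in a regression of the outcome (reading) on both. The data below are invented, and the study's actual procedure may differ in detail:

```python
def mediation_effects(x, m, y):
    """Indirect effect (a*b) and direct effect for a single-mediator model.

    Computed with ordinary least squares: a from M ~ X, then b and the
    direct effect from the two-predictor regression Y ~ X + M (solved via
    the centered 2x2 normal equations).
    """
    n = len(x)
    mx, mm, my = sum(x) / n, sum(m) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    smm = sum((mi - mm) ** 2 for mi in m)
    sxm = sum((xi - mx) * (mi - mm) for xi, mi in zip(x, m))
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    smy = sum((mi - mm) * (yi - my) for mi, yi in zip(m, y))
    a = sxm / sxx                              # path a: X -> M
    det = sxx * smm - sxm ** 2
    c_prime = (smm * sxy - sxm * smy) / det    # direct effect of X on Y
    b = (sxx * smy - sxm * sxy) / det          # path b: M -> Y, controlling X
    return a * b, c_prime

# Hypothetical scores: X = phoneme awareness, M = phonemic MMN, Y = reading.
x = [0.0, 1.0, 2.0, 3.0]
m = [0.0, 3.0, 3.0, 6.0]
y = [0.0, 9.0, 9.0, 18.0]
indirect, direct = mediation_effects(x, m, y)
print(round(indirect, 3), round(direct, 3))  # 5.4 0.0
```

Here the effect of X on Y runs entirely through M (direct effect zero), the pattern that full mediation would show; significance of the indirect effect is usually assessed with bootstrapping.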
18.
Int J Psychophysiol ; 190: 69-83, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37301445

ABSTRACT

BACKGROUND & AIMS: The mismatch negativity (MMN) and P300 event-related potentials (ERPs) have been studied in relation to phoneme discrimination and categorization, respectively. Although the effects of aging and sex on pure-tone perception have been widely investigated using these ERPs, evidence relating to phoneme perception is scarce. The current study aimed to provide insight into the effects of aging and sex on phoneme discrimination and categorization, as measured through the MMN and P300. METHOD: Inattentive and attentive oddball paradigms containing a phonemic place-of-articulation contrast were administered during EEG registration in sixty healthy individuals (thirty males and thirty females), with equal numbers of young (20-39 years), middle-aged (40-59 years) and elderly (60+ years) subjects included. The amplitude, onset latency and topographical distribution of the MMN and P300 effect, as well as the amplitude of the P1-N1-P2 complex, were analyzed for age group and sex differences. RESULTS: With respect to aging, elderly subjects demonstrated a reduced MMN and P300 amplitude compared to the young group, whereas the scalp distribution of both components was unaffected. No aging effects on the P1-N1-P2 complex were found. In elderly individuals, the P300 was found to be delayed compared to the young group, while no such effect on MMN latency could be observed. No differences in MMN and P300 measures could be identified between males and females. CONCLUSION: Differential effects of aging were found on the MMN and P300, specifically in terms of latency, in relation to phoneme perception. In contrast, sex scarcely affected either process.


Subjects
Aging , Evoked Potentials , Middle Aged , Aged , Humans , Male , Female , Acoustic Stimulation/methods , Aging/physiology , Evoked Potentials/physiology , Cognition , Perception , Electroencephalography/methods , Auditory Evoked Potentials/physiology , Auditory Perception/physiology
19.
Neurobiol Lang (Camb) ; 4(1): 29-52, 2023.
Article in English | MEDLINE | ID: mdl-37229141

ABSTRACT

Partial speech input is often understood to trigger rapid and automatic activation of successively higher-level representations of words, from sound to meaning. Here we show evidence from magnetoencephalography that this type of incremental processing is limited when words are heard in isolation as compared to continuous speech. This suggests a less unified and automatic word recognition process than is often assumed. We present evidence from isolated words that neural effects of phoneme probability, quantified by phoneme surprisal, are significantly stronger than (statistically null) effects of phoneme-by-phoneme lexical uncertainty, quantified by cohort entropy. In contrast, we find robust effects of both cohort entropy and phoneme surprisal during perception of connected speech, with a significant interaction between the contexts. This dissociation rules out models of word recognition in which phoneme surprisal and cohort entropy are common indicators of a uniform process, even though these closely related information-theoretic measures both arise from the probability distribution of wordforms consistent with the input. We propose that phoneme surprisal effects reflect automatic access of a lower level of representation of the auditory input (e.g., wordforms) while the occurrence of cohort entropy effects is task sensitive, driven by a competition process or a higher-level representation that is engaged late (or not at all) during the processing of single words.
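The two information-theoretic measures contrasted above have simple definitions over a lexicon: phoneme surprisal is -log2 of the probability of the next phoneme given the cohort so far, and cohort entropy is the Shannon entropy over the wordforms still consistent with the input. A toy sketch with an invented three-word lexicon and hypothetical frequencies:

```python
import math

# Toy lexicon with hypothetical frequencies; wordforms as phoneme tuples.
lexicon = {("k", "ae", "t"): 60,    # cat
           ("k", "ae", "p"): 30,    # cap
           ("k", "ao", "l"): 10}    # call

def cohort(prefix):
    """Wordforms consistent with the phonemes heard so far, with frequencies."""
    return {w: f for w, f in lexicon.items() if w[:len(prefix)] == prefix}

def phoneme_surprisal(prefix, next_phoneme):
    """-log2 P(next phoneme | cohort of the current prefix)."""
    cur = cohort(prefix)
    nxt = cohort(prefix + (next_phoneme,))
    return -math.log2(sum(nxt.values()) / sum(cur.values()))

def cohort_entropy(prefix):
    """Shannon entropy over the wordforms remaining in the cohort."""
    cur = cohort(prefix)
    total = sum(cur.values())
    return -sum((f / total) * math.log2(f / total) for f in cur.values())

# After /k ae/, /t/ is the likelier continuation, so it is less surprising:
print(phoneme_surprisal(("k", "ae"), "t"))  # -log2(60/90), about 0.585
print(cohort_entropy(("k", "ae")))          # entropy over {cat, cap}
```

Both measures derive from the same probability distribution over wordforms, which is exactly why the dissociation between their neural effects reported here is informative.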

20.
Brain Sci ; 13(4)2023 Apr 03.
Article in English | MEDLINE | ID: mdl-37190571

ABSTRACT

Auditory discrimination, the hearing ability crucial for speech and language development that allows one to perceive changes in the volume, duration and frequency of sounds, was assessed for 366 participants with normal peripheral hearing: 220 participants with auditory processing disorders (APD) and 146 typically developing (TD) children, all aged 6-9 years. Discrimination of speech was tested with nonsense words using the phoneme discrimination test (PDT), while pure tones were tested with the frequency pattern test (FPT). The obtained results were statistically analyzed and correlated. The median FPT result of participants with APD was less than half that of the TD group (20% vs. 50%; p < 0.05), and the PDT showed a similar pattern (21 vs. 24; p < 0.05). The FPT results of 9-year-old APD participants were worse than the results of TD 6-year-olds (30% vs. 40%; p < 0.05), indicating that a significant FPT deficit strongly suggests APD. The process of auditory discrimination development does not complete with the acquisition of phonemes but continues during school age. Phoneme discrimination is not yet fully developed even among 9-year-olds. Nonsense word tests allow for reliable testing of phoneme discrimination. APD children require testing with both the PDT and FPT because both test results inform the development of individual therapeutic programs.
