Results 1 - 20 of 11,821
1.
J Speech Lang Hear Res ; 67(10): 3691-3713, 2024 Oct 08.
Article in English | MEDLINE | ID: mdl-39366005

ABSTRACT

PURPOSE: For non-autistic children, it is well established that linguistic awareness skills support their success with reading and spelling. Few investigations have examined whether these same linguistic awareness skills play a role in literacy development for autistic elementary school-age children. This study serves as a first step in quantifying the phonological, prosodic, orthographic, and morphological awareness skills of autistic children; how these skills compare to those of non-autistic children; and their relation to literacy performance. METHOD: We measured and compared the phonological, prosodic, orthographic, and morphological awareness skills of 18 autistic (with average nonverbal IQs) and 18 non-autistic elementary school-age children, matched in age, nonverbal IQ, and real-word reading. The relations between linguistic awareness and the children's word-level literacy and reading comprehension skills were examined, and we explored whether the magnitude of these relations differed between the two groups. Regression analyses indicated the relative contribution of linguistic awareness variables to performance on the literacy measures for the autistic children. RESULTS: The non-autistic children outperformed the autistic children on most linguistic awareness measures. There were moderate-to-strong relations between performance on the linguistic awareness and literacy measures for the non-autistic children, and most associations were not reliably different from those for the autistic children. Regression analyses indicated that performance on specific linguistic awareness variables explained unique variance in the autistic children's literacy performance. CONCLUSION: Although less developed than those of their non-autistic peers, the linguistic awareness skills of autistic elementary school-age children are important for successful reading and spelling.


Subject(s)
Autistic Disorder , Awareness , Linguistics , Literacy , Reading , Humans , Child , Male , Female , Autistic Disorder/psychology , Language Tests , Writing , Phonetics , Comprehension
2.
Sci Justice ; 64(5): 485-497, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39277331

ABSTRACT

Verifying the speaker of a speech fragment can be crucial in attributing a crime to a suspect. Given disputed and reference speech material, the question can be addressed within the likelihood ratio framework, the recommended and scientifically accepted approach for reporting evidential strength in court. In forensic practice, auditory and acoustic analyses are usually performed to carry out such a verification task, considering a diversity of features such as language competence, pronunciation, or other linguistic features. Automated speaker comparison systems can also be used alongside those manual analyses. State-of-the-art automatic speaker comparison systems are based on deep neural networks that take acoustic features as input. Additional information, though, may be obtained from linguistic analysis. In this paper, we ask whether, when, and how modern acoustic-based systems can be complemented by an authorship technique based on frequent words, within the likelihood ratio framework. We consider three different approaches to deriving a combined likelihood ratio: using a support vector machine algorithm, fitting bivariate normal distributions, and passing the score of the acoustic system as additional input to the frequent-word analysis. We apply our method to the forensically relevant FRIDA dataset and the FISHER corpus, and we explore under which conditions fusion is valuable. We evaluate our results in terms of the log-likelihood-ratio cost (Cllr) and the equal error rate (EER). We show that fusion can be beneficial, especially for intercepted phone calls with background noise.
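Both reported metrics are standard and have simple closed forms: Cllr penalises likelihood ratios logarithmically over same-speaker and different-speaker trials, and the EER is the operating point where miss and false-alarm rates coincide. A minimal Python sketch of both follows, assuming calibrated likelihood ratios and raw comparison scores as inputs; the function and array names are illustrative, not taken from the paper.

```python
import numpy as np

def cllr(lr_same, lr_diff):
    """Log-likelihood-ratio cost (Cllr). lr_same: LRs from same-speaker
    trials; lr_diff: LRs from different-speaker trials. A system that
    always outputs LR = 1 scores exactly 1; a well-calibrated,
    informative system scores well below 1."""
    lr_same = np.asarray(lr_same, dtype=float)
    lr_diff = np.asarray(lr_diff, dtype=float)
    return 0.5 * (np.mean(np.log2(1.0 + 1.0 / lr_same))
                  + np.mean(np.log2(1.0 + lr_diff)))

def eer(scores_same, scores_diff):
    """Equal error rate via a threshold sweep: decide 'same speaker'
    when a score meets the threshold, and find where the
    false-negative and false-positive rates cross."""
    scores_same = np.asarray(scores_same, dtype=float)
    scores_diff = np.asarray(scores_diff, dtype=float)
    thresholds = np.sort(np.concatenate([scores_same, scores_diff]))
    fnr = np.array([(scores_same < t).mean() for t in thresholds])
    fpr = np.array([(scores_diff >= t).mean() for t in thresholds])
    i = np.argmin(np.abs(fnr - fpr))
    return (fnr[i] + fpr[i]) / 2.0
```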


Subject(s)
Forensic Sciences , Humans , Forensic Sciences/methods , Likelihood Functions , Linguistics , Support Vector Machine , Speech Acoustics , Algorithms , Speech
3.
J Child Lang ; 51(4): 800-833, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39324774

ABSTRACT

While there are always differences in children's input, it is unclear how often these differences impact language development - that is, how often they are developmentally meaningful - and why they are (or are not). We describe a new approach using computational cognitive modeling that links children's input to predicted language development outcomes and can identify whether input differences are potentially developmentally meaningful. We use this approach to investigate whether there is developmentally meaningful input variation across socio-economic status (SES) with respect to the complex syntactic knowledge called syntactic islands. We focus on four island types with available data about the target linguistic behavior. Despite several measurable differences in syntactic island input across SES, our model predicts that this variation is not developmentally meaningful: it predicts no differences in the syntactic island knowledge that can be learned from that input. We discuss implications for variability in language development across SES.


Subject(s)
Child Language , Language Development , Humans , Child, Preschool , Social Class , Linguistics , Cognition , Female , Child , Computer Simulation , Male , Infant
4.
Neuron ; 112(18): 2996-2998, 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39326388

ABSTRACT

In this issue of Neuron, Zada et al.¹ examine how linguistic information flows from a speaker's brain to a listener's brain during face-to-face spontaneous conversation. The authors use intracranial recordings from five pairs of epilepsy patients and neural network language models to establish the existence of an abstract, linguistic space that is shared during conversation.


Subject(s)
Linguistics , Humans , Brain/physiology , Language , Epilepsy/physiopathology
5.
Nurs Open ; 11(9): e70017, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39279598

ABSTRACT

AIM: To translate the Empowerment Scale for Pregnant Women (ESPW) into Chinese and to assess its linguistic validity. METHODS: An integrative translation process, the Delphi technique, and cognitive interviews were used to implement cross-cultural adaptation and enhance comprehensibility and linguistic validation. This study recruited 14 experts for the expert review and conducted cognitive interviews with 15 pregnant women. RESULTS: The two-round Delphi process produced consensus on cultural applicability. Content validity reached good levels: the item-level content validity index (CVI) ranged from 0.78 to 1.00, and the scale-level content validity indices, calculated using two different formulas, were 0.97 and 0.81, respectively. Kappa values ranged from 0.74 to 1.00. In the cognitive interviews, pregnant women could understand most of the items and response options. Revisions to the wording were made based on suggestions from experts and pregnant women. CONCLUSION: The prefinal simplified Chinese ESPW was semantically and conceptually equivalent to the English version and is well prepared for psychometric testing in the next stage of cross-cultural adaptation. PATIENT OR PUBLIC CONTRIBUTION: This comprehensive method successfully produced a Chinese tool for measuring the empowerment of pregnant women, indicating the tool's international applicability and the scientific rigour of the method. The simplified Chinese ESPW has the potential to support the identification of pregnant women's empowerment levels and the evaluation of the effectiveness of health education and promotion programmes.


Subject(s)
Empowerment , Pregnant Women , Psychometrics , Humans , Female , Pregnancy , China , Pregnant Women/psychology , Pregnant Women/ethnology , Adult , Psychometrics/instrumentation , Psychometrics/methods , Surveys and Questionnaires , Reproducibility of Results , Cross-Cultural Comparison , Delphi Technique , Translations , Translating , Linguistics
6.
J Speech Lang Hear Res ; 67(9): 3232-3254, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39265153

ABSTRACT

PURPOSE: The purpose of this study was to determine whether there are age-related differences in semantic processing under linguistic and nonlinguistic masking, as measured by the N400. METHOD: Sixteen young (19-31 years) and 16 middle-aged (41-57 years) adults with relatively normal hearing sensitivity were asked to determine whether word pairs were semantically related or unrelated in three listening conditions: quiet, forward two-talker speech competition, and reverse two-talker speech competition at 0 dB SNR. Behavioral data (accuracies and reaction times) and auditory event-related potential data (N400 amplitudes and latencies) were analyzed using separate mixed-design multivariate analyses of variance. RESULTS: Mean N400 amplitudes for semantically related word pairs were similar between young and middle-aged adults. Although neither group showed N400 amplitude differences between masker types, N400 amplitude was significantly greater in the presence of linguistic and nonlinguistic masking than in quiet. In contrast, mean N400 amplitudes for semantically unrelated words were significantly more negative for young adults and did not differ significantly among listening conditions. CONCLUSIONS: Our findings illustrate age-related differences during a semantic processing task, as indexed by the N400, that may not be evident in suprathreshold speech repetition/recognition tasks or behavioral data. Additionally, N400 amplitudes indicated that linguistic masking effects on semantic processing were equivalent to nonlinguistic masking effects.


Subject(s)
Electroencephalography , Evoked Potentials, Auditory , Perceptual Masking , Semantics , Speech Perception , Humans , Adult , Young Adult , Male , Female , Perceptual Masking/physiology , Speech Perception/physiology , Middle Aged , Evoked Potentials, Auditory/physiology , Reaction Time/physiology , Age Factors , Aging/physiology , Aging/psychology , Linguistics , Evoked Potentials/physiology
7.
Philos Trans R Soc Lond B Biol Sci ; 379(1913): 20230412, 2024 Nov 04.
Article in English | MEDLINE | ID: mdl-39278240

ABSTRACT

One apparent feature of mental time travel is the ability to recursively embed temporal perspectives across different times: humans can remember how we anticipated the future and anticipate how we will remember the past. This recursive structure of mental time travel might be formalized in terms of a 'grammar' that is reflective of but more general than linguistic notions of absolute and relative tense. Here, I provide a foundation for this grammatical framework, emphasizing a bounded (rather than unbounded) recursive function that supports mental time travel to a limited temporal depth and to actual and possible scenarios. Anticipated counterfactual thinking, for instance, entails three levels of mental time travel to a possible scenario ('in the future, I will reflect on how my past self could have taken a different future action') and is centrally implicated in complex human decision-making. This perspective calls for further research into the mechanisms, ontogeny, functions and phylogeny of recursive mental time travel, and revives the question of links with other recursive forms of thinking such as theory of mind. This article is part of the theme issue 'Elements of episodic memory: lessons from 40 years of research'.


Subject(s)
Memory, Episodic , Humans , Decision Making , Linguistics/history , Linguistics/methods , Thinking/physiology , Time Perception/physiology , History, 20th Century , History, 21st Century , Cognitive Science/history , Cognitive Science/methods
8.
PLoS One ; 19(9): e0309900, 2024.
Article in English | MEDLINE | ID: mdl-39240959

ABSTRACT

The bipolar complex fuzzy linguistic set is a well-established and powerful model for coping with vague and uncertain information. A bipolar complex fuzzy linguistic set contains a positive membership function, a negative membership function, and a linguistic variable; fuzzy sets and bipolar fuzzy sets are special cases of it. In this manuscript, we describe Aczel-Alsina operational laws for bipolar complex fuzzy linguistic values based on the Aczel-Alsina t-norm and t-conorm. Additionally, we derive Aczel-Alsina power aggregation operators for bipolar complex fuzzy linguistic data, namely the bipolar complex fuzzy linguistic Aczel-Alsina power averaging, power weighted averaging, power geometric, and power weighted geometric operators, and establish their fundamental properties, such as idempotency, monotonicity, and boundedness. Moreover, we develop a Weighted Aggregated Sum Product Assessment (WASPAS) technique based on the proposed theory. In the context of geographic information systems and spatial information systems, coupling concerns the relationships among the components of a geographic information system and can occur at several levels, for instance spatial coupling, data coupling, and functional coupling. To address this problem, we apply a multi-attribute decision-making model built on the proposed operators to identify the best technique for addressing geographic information systems. Finally, we present numerical examples comparing the ranking results of the proposed and existing techniques.
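For orientation, the Aczel-Alsina t-norm and t-conorm underlying these operational laws have compact closed forms, as does the weighted averaging of membership degrees. The Python sketch below is a generic illustration under the usual textbook definitions, not the paper's implementation: degrees are assumed to lie strictly in (0, 1), and the parameter name lam (for λ > 0) is ours.

```python
import numpy as np

def aa_tnorm(x, y, lam=2.0):
    """Aczel-Alsina t-norm:
    T(x, y) = exp(-((-ln x)^lam + (-ln y)^lam)^(1/lam)),
    for x, y strictly in (0, 1) and lam > 0. At lam = 1 this
    reduces to the product t-norm x * y."""
    return np.exp(-(((-np.log(x)) ** lam + (-np.log(y)) ** lam) ** (1.0 / lam)))

def aa_tconorm(x, y, lam=2.0):
    """Dual Aczel-Alsina t-conorm: S(x, y) = 1 - T(1 - x, 1 - y)."""
    return 1.0 - aa_tnorm(1.0 - x, 1.0 - y, lam)

def aa_weighted_average(mu, w, lam=2.0):
    """Aczel-Alsina weighted averaging of membership degrees mu with
    weights w summing to 1:
    1 - exp(-(sum_i w_i * (-ln(1 - mu_i))^lam)^(1/lam))."""
    mu = np.asarray(mu, dtype=float)
    w = np.asarray(w, dtype=float)
    return 1.0 - np.exp(-(np.sum(w * (-np.log(1.0 - mu)) ** lam)) ** (1.0 / lam))
```

In the bipolar complex fuzzy linguistic setting, this machinery is applied to the positive and negative membership components (and, typically, a linguistic term index), with the power-aggregation variants additionally weighting each argument by its support from the others.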


Subject(s)
Fuzzy Logic , Geographic Information Systems , Linguistics , Models, Theoretical , Algorithms , Humans
9.
BMJ ; 386: q2074, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39304306
10.
Sci Rep ; 14(1): 22605, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39349677

ABSTRACT

Cognitively reappraising a stressful experience-reinterpreting the situation to blunt its emotional impact-is effective for regulating negative emotions. English speakers have been shown to engage in linguistic distancing when reappraising, spontaneously using words that are more abstract or impersonal. Across two preregistered studies (N = 299), we investigated whether such shifts in language use generalize to Spanish, a language proposed to offer unique tools for expressing psychological distance. Bilingual speakers of Spanish and English and a comparison group of English monolinguals transcribed their thoughts in each of their languages while responding naturally to negative images or reappraising them. Reappraisal shifted markers of psychological distance common to both languages (e.g., reduced use of "I"/"yo"), as well as Spanish-specific markers (e.g., greater use of "estar": "to be" for temporary states). Whether these linguistic shifts reflected successful emotion regulation depended on language experience: in exploratory analyses, the common markers were more strongly linked to reduced negative affect for late than early Spanish learners, and one Spanish-specific marker ("estar") also predicted reduced negative affect for early learners. Our findings suggest that people distance their language in both cross-linguistically shared and language-specific ways when regulating their emotions.


Subject(s)
Emotional Regulation , Language , Linguistics , Multilingualism , Humans , Female , Male , Emotional Regulation/physiology , Linguistics/methods , Adult , Emotions/physiology , Young Adult , Adolescent
11.
Trends Ecol Evol ; 39(10): 881-884, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39232857

ABSTRACT

Language connects cultural and biological diversity and can contribute to both big data and localised approaches to improve conservation. Analysing Indigenous languages at regional level supports understanding of local ecologies and cultural revitalisation. Collated linguistic datasets can help to identify large-scale patterns, including extinctions, and forge robust multidisciplinary approaches to biocultural decision-making.


Subject(s)
Conservation of Natural Resources , Language , Humans , Indigenous Peoples , Linguistics , Biodiversity
12.
Cogn Sci ; 48(9): e13497, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39283250

ABSTRACT

While a large body of work in sentence comprehension has explored how different types of linguistic information are used to guide syntactic parsing, less is known about the effect of discourse structure. This study investigates this question, focusing on the main and subordinate discourse contrast manifested in the distinction between restrictive relative clauses (RRCs) and appositive relative clauses (ARCs) in American English. In three self-paced reading experiments, we examined whether both RRCs and ARCs interfere with the matrix clause content and give rise to the agreement attraction effect. While the standard attraction effect was consistently observed in the baseline RRC structures, the effect varied in the ARC structures. These results collectively suggest that discourse structure indeed constrains syntactic dependency resolution. Most importantly, we argue that what is at stake is not the static discourse structure properties at the global sentence level. Instead, attention should be given to the incremental update of the discourse structure in terms of which discourse questions are active at any given moment of a discourse. The current findings have implications for understanding the way discourse structure, specifically the active state of discourse questions, constrains memory retrieval.


Subject(s)
Comprehension , Language , Reading , Humans , Linguistics , Psycholinguistics , Female , Male , Adult
13.
Neuron ; 112(18): 3211-3222.e5, 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39096896

ABSTRACT

Effective communication hinges on a mutual understanding of word meaning in different contexts. We recorded brain activity using electrocorticography during spontaneous, face-to-face conversations in five pairs of epilepsy patients. We developed a model-based coupling framework that aligns brain activity in both speaker and listener to a shared embedding space from a large language model (LLM). The context-sensitive LLM embeddings allow us to track the exchange of linguistic information, word by word, from one brain to another in natural conversations. Linguistic content emerges in the speaker's brain before word articulation and rapidly re-emerges in the listener's brain after word articulation. The contextual embeddings better capture word-by-word neural alignment between speaker and listener than syntactic and articulatory models. Our findings indicate that the contextual embeddings learned by LLMs can serve as an explicit numerical model of the shared, context-rich meaning space humans use to communicate their thoughts to one another.
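As a toy stand-in for this coupling analysis (our simplification, not the authors' pipeline: ridge regularisation, word-aligned neural features, and mean Pearson correlation as the alignment score), one can fit a linear map from contextual word embeddings to the speaker's neural activity and test how well predictions from the shared embedding space match the listener:

```python
import numpy as np

def fit_ridge(E, Y, alpha=1.0):
    """Linear encoding model from contextual word embeddings
    E (n_words x n_dims) to word-aligned neural features
    Y (n_words x n_electrodes): W = (E'E + alpha*I)^-1 E'Y."""
    d = E.shape[1]
    return np.linalg.solve(E.T @ E + alpha * np.eye(d), E.T @ Y)

def speaker_listener_coupling(E, Y_speaker, Y_listener, alpha=1.0):
    """Fit the embedding-to-brain map on the speaker, then score how
    well the same shared-space predictions match the listener's
    activity, averaged over electrodes."""
    W = fit_ridge(E, Y_speaker, alpha)
    Y_hat = E @ W
    rs = [np.corrcoef(Y_hat[:, e], Y_listener[:, e])[0, 1]
          for e in range(Y_listener.shape[1])]
    return float(np.mean(rs))
```

A faithful version would cross-validate across words and lag the neural features relative to word onset, since the reported speaker-before/listener-after asymmetry lives in those lags.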


Subject(s)
Brain , Electrocorticography , Humans , Brain/physiology , Male , Female , Linguistics , Epilepsy/physiopathology , Adult , Communication , Language , Models, Neurological , Thinking/physiology
14.
J Speech Lang Hear Res ; 67(9): 3081-3093, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39110814

ABSTRACT

PURPOSE: Our goal was to compare statistical learning abilities between preschoolers with developmental language disorder (DLD) and peers with typical development (TD) by assessing their learning of two artificial grammars. METHOD: Four- and 5-year-olds with and without DLD were compared on their statistical learning ability using two artificial grammars. After learning an aX grammar, participants learned a relatively more complex abX grammar with a nonadjacent relationship between a and X. Participants were tested on their generalization of the grammatical pattern to new sequences with novel X elements that conformed to (aX, abX) or violated (Xa, baX) the grammars. RESULTS: Results revealed an interaction between age and language group. Four-year-olds with and without DLD performed equivalently on the aX and abX grammar tests, and neither of the 4-year-old groups' accuracy scores exceeded chance. In contrast, among 5-year-olds, TD participants scored significantly higher on aX tests compared to participants with DLD, but the groups' abX scores did not differ. Five-year-old participants with DLD did not exceed chance on any test, whereas 5-year-old TD participants' scores exceeded chance on all grammar learning outcomes. Regression analyses indicated that aX performance positively predicted learning outcomes on the subsequent abX grammar for TD participants. CONCLUSION: These results indicate that preschool-age participants with DLD show deficits relative to typical peers in statistical learning, but group differences vary with participant age and type of grammatical structure being tested. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.26487376.


Subject(s)
Child Language , Language Development Disorders , Language Tests , Humans , Child, Preschool , Language Development Disorders/psychology , Female , Male , Age Factors , Learning , Linguistics
15.
J Speech Lang Hear Res ; 67(9): 3133-3147, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39196847

ABSTRACT

PURPOSE: Persons with aphasia (PWA) experience differences in attention after stroke, potentially impacting cognitive/language performance. This secondary analysis investigated physiologically measured vigilant attention during linguistic and nonlinguistic processing in PWA and control participants. METHOD: To evaluate performance and attention in a language task, seven PWA read sentences aloud (linguistic task) and were compared to a previous data set of 10 controls and 10 PWA. To evaluate performance and attention in a language-independent task, 11 controls and nine PWA completed the Bivalent Shape Task (nonlinguistic task). Continuous electroencephalogram (EEG) data were collected during each session. A previously validated EEG algorithm classified vigilant-attention state for each experiment trial into high, moderate, distracted, or no attention. Dependent measures were task accuracy and amount of time spent in each attention state (measured by the number of trials). RESULTS: PWA produced significantly more errors than controls on the linguistic task, but groups performed similarly on the nonlinguistic task. During the linguistic task, controls spent significantly more time than PWA in a moderate-attention state, but no statistically significant differences were found between groups for other attention states. For the nonlinguistic task, amount of time controls and PWA spent in each attention state was more evenly distributed. When directly comparing attention patterns between linguistic and nonlinguistic tasks, PWA showed significantly more time in a high-attention state during the linguistic task as compared to the nonlinguistic task; however, controls showed no significant differences between linguistic and nonlinguistic tasks. CONCLUSIONS: This study provides new evidence that PWA experience a heightened state of vigilant attention when language processing demands are higher (during a linguistic task) than when language demands are lower (during a nonlinguistic task). Collectively, results of this study suggest that when processing language, PWA may allocate more attentional resources than when completing other kinds of cognitive tasks.


Subject(s)
Aphasia , Attention , Cognition , Electroencephalography , Humans , Aphasia/psychology , Aphasia/physiopathology , Aphasia/etiology , Attention/physiology , Female , Male , Middle Aged , Aged , Language , Adult , Linguistics
16.
Biosystems ; 244: 105297, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39154841

ABSTRACT

Symbolic systems (SSs) are uniquely products of living systems, such that symbolism and life may be inextricably intertwined phenomena. Within a given SS, there is a range of symbol complexity over which signaling is functionally optimized. This range exists relative to a complex and potentially infinitely large background of latent, unused symbol space. Understanding how symbol sets sample this latent space is relevant to diverse fields including biochemistry and linguistics. We quantitatively explored the graphic complexity of two biosemiotic systems: genetically encoded amino acids (GEAAs) and written language. Molecular and graphical notions of complexity are highly correlated for GEAAs and written language. Symbol sets are generally neither minimally nor maximally complex relative to their latent spaces, but exist across an objectively definable distribution, with the GEAAs having especially low complexity. The selection pressures guiding these disparate systems are explicable by symbol production and disambiguation efficiency. These selection pressures may be universal, offer a quantifiable metric for comparison, and suggest that all life in the Universe may discover optimal symbol set complexity distributions with respect to their latent spaces. If so, the "complexity" of individual components of SSs may not be as strong a biomarker as symbol set complexity distribution.


Subject(s)
Amino Acids , Amino Acids/genetics , Amino Acids/metabolism , Symbolism , Humans , Language , Writing , Linguistics
17.
J Alzheimers Dis ; 100(s1): S25-S43, 2024.
Article in English | MEDLINE | ID: mdl-39121121

ABSTRACT

Background: The assessment of language deficits can be valuable in the early clinical diagnosis of neurodegenerative disorders, including Alzheimer's disease (AD). Objective: The present study aims to explore whether language markers at the macrostructural level could assist with placing an individual along the dementia continuum, employing production data from structured narratives. Methods: We administered a Picture Sequence Narrative Discourse Task to 170 speakers of Greek: young healthy controls (yHC), cognitively intact healthy elders (eHC), and elderly participants with subjective cognitive impairment (SCI), with mild cognitive impairment (MCI), and with AD dementia at the mild/moderate stages. Structural MRIs, medical history, neurological examination, and neuropsychological/cognitive screening determined the status of each speaker to group them appropriately. Results: The data analysis revealed that the Macrostructure Index, Irrelevant Info, and Narration Density markers can track cognitive decline and AD (p < 0.001; Macrostructural Index: eHC versus AD Sensitivity 93.8%, Specificity 74.4%, MCI versus AD Sensitivity 93.8%, Specificity 66.7%; Narration Density: eHC versus AD Sensitivity 90.6%, Specificity 71.8%, MCI versus AD Sensitivity 93.8%, Specificity 66.7%). Moreover, Narrative Complexity was significantly affected for subjects with AD, Irrelevant Info increased in the narrations of speakers with MCI and AD, while Narration Length did not reliably differentiate between the cognitively intact groups and the clinical ones. Conclusions: Narrative Macrostructure Indices provide valuable information on the language profile of speakers with(out) intact cognition, revealing subtle early signs of cognitive decline and AD and suggesting that the inclusion of language-based assessment tools would facilitate the clinical process.


Subject(s)
Alzheimer Disease , Cognitive Dysfunction , Narration , Neuropsychological Tests , Humans , Alzheimer Disease/diagnostic imaging , Alzheimer Disease/pathology , Alzheimer Disease/psychology , Male , Female , Aged , Cognitive Dysfunction/diagnostic imaging , Cognitive Dysfunction/diagnosis , Greece , Magnetic Resonance Imaging , Middle Aged , Language , Linguistics , Aged, 80 and over , Language Tests , Language Disorders/etiology
18.
Sensors (Basel) ; 24(15), 2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39123907

ABSTRACT

Skeleton-based action recognition, renowned for its computational efficiency and robustness to lighting variations, has become a focal point in motion analysis. However, most current methods extract only global skeleton features, overlooking the potential semantic relationships among partial limb motions. For instance, the subtle differences between actions such as "brush teeth" and "brush hair" are mainly distinguished by specific elements. Although combining limb movements provides a more holistic representation of an action, relying solely on skeleton points is inadequate for capturing these nuances. This motivates us to integrate fine-grained linguistic descriptions into the learning of skeleton features to capture more discriminative behavior representations. To this end, we introduce a new Linguistic-Driven Partial Semantic Relevance Learning framework (LPSR) in this work. We use state-of-the-art large language models to generate linguistic descriptions of local limb motions and to constrain the learning of those motions, and we aggregate global skeleton-point representations with the textual representations (generated by an LLM) to obtain a more generalized cross-modal behavioral representation. On this basis, we propose a cyclic attentional interaction module to model the implicit correlations between partial limb motions. Extensive ablation experiments demonstrate the effectiveness of the proposed method, which also obtains state-of-the-art results.


Subject(s)
Semantics , Humans , Linguistics , Movement/physiology , Pattern Recognition, Automated/methods , Algorithms , Learning/physiology
19.
Sci Rep ; 14(1): 19105, 2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39154048

ABSTRACT

The multivariate temporal response function (mTRF) is an effective tool for investigating the neural encoding of acoustic and complex linguistic features in natural continuous speech. In this study, we investigated how neural representations of speech features derived from natural stimuli relate to early signs of cognitive decline in older adults, taking into account the effects of hearing. Participants without (n = 25) and with (n = 19) early signs of cognitive decline listened to an audiobook while their electroencephalography responses were recorded. Using the mTRF framework, we modeled the relationship between speech input and neural response via different acoustic, segmented, and linguistic encoding models and examined the response functions in terms of encoding accuracy, signal power, peak amplitudes, and latencies. Our results showed no significant effect of cognitive decline or hearing ability on the neural encoding of acoustic and linguistic speech features. However, we found a significant interaction between hearing ability and the word-level segmentation model, suggesting that hearing impairment specifically affected encoding accuracy for this model, while the encoding of other features was unaffected by hearing ability. These results suggest that while speech processing markers remain unaffected by cognitive decline and hearing loss per se, the neural encoding of word-level segmented speech features in older adults is affected by hearing loss but not by cognitive decline. This study emphasises the effectiveness of mTRF analysis for studying the neural encoding of speech and argues for extending this research to investigate its clinical relevance for hearing loss and cognition.
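At its core, the mTRF is a regularised linear mapping from time-lagged stimulus features to the EEG. A minimal sketch follows, assuming ridge regression and zero-padded lags; the lag convention, parameter names, and regularisation value are illustrative, not those of the study.

```python
import numpy as np

def lagged_design(stim, lags):
    """Stack time-lagged copies of the stimulus features
    stim (n_times x n_features) into a design matrix of shape
    (n_times, n_features * len(lags)); positive lags mean the EEG
    follows the stimulus."""
    T, F = stim.shape
    X = np.zeros((T, F * len(lags)))
    for j, lag in enumerate(lags):
        shifted = np.roll(stim, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0.0   # zero-pad instead of wrapping around
        elif lag < 0:
            shifted[lag:] = 0.0
        X[:, j * F:(j + 1) * F] = shifted
    return X

def fit_mtrf(stim, eeg, lags, alpha=100.0):
    """Ridge estimate of the multivariate TRF, one weight per
    feature-lag pair and EEG channel: w = (X'X + alpha*I)^-1 X'y."""
    X = lagged_design(stim, lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)
```

Encoding accuracy is then the correlation between held-out EEG and the prediction lagged_design(stim, lags) @ w, and the response function (peak amplitudes and latencies) is read directly off the weights across lags.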


Subject(s)
Cognitive Dysfunction , Electroencephalography , Hearing Loss , Speech Perception , Humans , Male , Female , Aged , Cognitive Dysfunction/physiopathology , Hearing Loss/physiopathology , Speech Perception/physiology , Speech/physiology , Middle Aged , Cues , Linguistics , Acoustic Stimulation , Aged, 80 and over
20.
Sci Rep ; 14(1): 18922, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39143297

ABSTRACT

When a person listens to natural speech, the relation between features of the speech signal and the corresponding evoked electroencephalogram (EEG) is indicative of neural processing of the speech signal. Using linguistic representations of speech, we investigate the differences in neural processing between speech in a native language and speech in a foreign language that is not understood. We conducted experiments using three stimuli: a comprehensible language, an incomprehensible language, and randomly shuffled words from a comprehensible language, while recording the EEG signals of native Dutch-speaking participants. We modeled the neural tracking of linguistic features of the speech signals using a deep learning model in a match-mismatch task that relates EEG signals to speech, while accounting for lexical segmentation features reflecting acoustic processing. The deep learning model effectively classifies coherent versus nonsense languages. We also observed significant differences in tracking patterns between comprehensible and incomprehensible speech stimuli within the same language. These results demonstrate the potential of deep learning frameworks for measuring speech understanding objectively.


Subject(s)
Electroencephalography , Language , Speech Perception , Humans , Speech Perception/physiology , Electroencephalography/methods , Female , Male , Adult , Young Adult , Deep Learning , Speech/physiology , Linguistics