Results 1 - 13 of 13
1.
J Integr Neurosci ; 23(7): 139, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39082290

ABSTRACT

BACKGROUND: Segments and tone are important sub-syllabic units that play large roles in lexical processing in tonal languages. However, their roles in lexical processing remain unclear, and the event-related potential (ERP) technique is well suited to exploring the cognitive mechanisms involved. METHODS: The high temporal resolution of ERP makes the technique suitable for tracking rapidly changing spoken language signals. The present ERP study examined the different roles of segments and tone in Mandarin Chinese lexical processing. An auditory priming experiment was designed that included five types of priming stimuli: consonant mismatch, vowel mismatch, tone mismatch, unrelated mismatch, and identity. Participants were asked to judge whether the target of the prime-target pair was a real Mandarin disyllabic word. RESULTS: Behavioral results, including reaction time and response accuracy, were collected along with ERP results. Results differed from those of previous studies, which showed a dominant role of consonants in lexical access, mainly in non-tonal languages such as English. Our results showed that consonants and vowels play comparable roles, whereas tone plays a less important role than consonants and vowels in lexical processing in Mandarin. CONCLUSIONS: These results have implications for understanding the brain mechanisms underlying lexical processing of tonal languages.


Subjects
Electroencephalography , Evoked Potentials , Speech Perception , Humans , Male , Female , Young Adult , Speech Perception/physiology , Adult , Evoked Potentials/physiology , Reaction Time/physiology , Brain/physiology , Evoked Potentials, Auditory/physiology , Psycholinguistics , Language
2.
Dyslexia ; 21(2): 97-122, 2015 May.
Article in English | MEDLINE | ID: mdl-25820191

ABSTRACT

It is now well established that anticipation of upcoming input is a key characteristic of spoken language comprehension. It has also frequently been observed that literacy influences spoken language processing. Here, we investigated whether anticipatory spoken language processing is related to individuals' word reading abilities. Dutch adults with dyslexia and a control group participated in two eye-tracking experiments. Experiment 1 was conducted to assess whether adults with dyslexia show the typical language-mediated eye gaze patterns. Eye movements of both adults with and without dyslexia closely replicated earlier research: spoken language is used to direct attention to relevant objects in the environment in a closely time-locked manner. In Experiment 2, participants received instructions (e.g., 'Kijk naar de(COM) afgebeelde piano(COM)', look at the displayed piano) while viewing four objects. Articles (Dutch 'het' or 'de') were gender marked such that the article agreed in gender only with the target, and thus, participants could use gender information from the article to predict the target object. The adults with dyslexia anticipated the target objects but much later than the controls. Moreover, participants' word reading scores correlated positively with their anticipatory eye movements. We conclude by discussing the mechanisms by which reading abilities may influence predictive language processing.


Subjects
Anticipation, Psychological , Dyslexia/physiopathology , Dyslexia/psychology , Eye Movements , Reading , Speech Perception , Adolescent , Adult , Aptitude , Comprehension , Female , Humans , Male , Young Adult
3.
JMIR Cancer ; 10: e43070, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39037754

ABSTRACT

BACKGROUND: Commonly offered as supportive care, therapist-led online support groups (OSGs) are a cost-effective way to provide support to individuals affected by cancer. One important indicator of a successful OSG session is group cohesion; however, monitoring group cohesion can be challenging due to the lack of nonverbal cues and in-person interactions in text-based OSGs. The Artificial Intelligence-based Co-Facilitator (AICF) was designed to contextually identify therapeutic outcomes from conversations and produce real-time analytics. OBJECTIVE: The aim of this study was to develop a method to train and evaluate AICF's capacity to monitor group cohesion. METHODS: AICF used a text classification approach to extract mentions of group cohesion within conversations. A sample of data was annotated by human scorers and used as training data to build the classification model. The annotations were further supported by finding contextually similar group cohesion expressions using word embedding models. AICF's performance was also compared against the natural language processing software Linguistic Inquiry and Word Count (LIWC). RESULTS: AICF was trained on 80,000 messages obtained from Cancer Chat Canada. We tested AICF on 34,048 messages. Human experts scored 6797 (20%) of the messages to evaluate the ability of AICF to classify group cohesion. Results showed that machine learning algorithms combined with human input could detect group cohesion, a clinically meaningful indicator of effective OSGs. After retraining with human input, AICF reached an F1-score of 0.82. AICF performed slightly better than LIWC at identifying group cohesion. CONCLUSIONS: AICF has the potential to assist therapists by detecting group discord that is amenable to real-time intervention. Overall, AICF presents a unique opportunity to strengthen patient-centered care in web-based settings by attending to individual needs.
INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.2196/21453.
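The F1-score of 0.82 reported above is the harmonic mean of precision and recall on the positive ("group cohesion mentioned") class. A minimal sketch of how such a text-classification evaluation might be computed; the labels and predictions below are hypothetical, not data from the study:

```python
def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical human annotations vs. classifier output for six messages
# (1 = cohesion mentioned, 0 = not mentioned)
y_true = [1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1]
print(round(f1_score(y_true, y_pred), 2))
```

In practice a library routine (e.g., scikit-learn's `f1_score`) would be used; the point here is only how the single number summarizes both kinds of classification error.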

4.
Front Hum Neurosci ; 17: 1079493, 2023.
Article in English | MEDLINE | ID: mdl-36742356

ABSTRACT

Negation is frequently used in natural language, yet relatively little is known about its processing. More importantly, what is known regarding the neurophysiological processing of negation is mostly based on results of studies using written stimuli (the word-by-word paradigm). While the results of these studies have suggested processing costs in connection to negation (increased negativities in brain responses), it is difficult to know how this translates into processing of spoken language. We therefore developed an auditory paradigm based on a previous visual study investigating processing of affirmatives, sentential negation (not), and prefixal negation (un-). The findings of processing costs were replicated but differed in the details. Importantly, the pattern of ERP effects suggested less effortful processing for auditorily presented negated forms (restricted to increased anterior and posterior positivities) in comparison to visually presented negated forms. We suggest that the natural flow of spoken language reduces variability in processing and therefore results in clearer ERP patterns.

5.
Brain Sci ; 13(3)2023 Mar 19.
Article in English | MEDLINE | ID: mdl-36979322

ABSTRACT

Recent studies have questioned past conclusions regarding the mechanisms of the McGurk illusion, especially how McGurk susceptibility might inform our understanding of audiovisual (AV) integration. We previously proposed that the McGurk illusion is likely attributable to a default mechanism, whereby either the visual system, auditory system, or both default to specific phonemes-those implicated in the McGurk illusion. We hypothesized that the default mechanism occurs because visual stimuli with an indiscernible place of articulation (like those traditionally used in the McGurk illusion) lead to an ambiguous perceptual environment and thus a failure in AV integration. In the current study, we tested the default hypothesis as it pertains to the auditory system. Participants performed two tasks. One task was a typical McGurk illusion task, in which individuals listened to auditory-/ba/ paired with visual-/ga/ and judged what they heard. The second task was an auditory-only task, in which individuals transcribed trisyllabic words with a phoneme replaced by silence. We found that individuals' transcription of missing phonemes often defaulted to '/d/t/th/', the same phonemes often experienced during the McGurk illusion. Importantly, individuals' default rate was positively correlated with their McGurk rate. We conclude that the McGurk illusion arises when people fail to integrate visual percepts with auditory percepts, due to visual ambiguity, thus leading the auditory system to default to phonemes often implicated in the McGurk illusion.

6.
Brain Sci ; 13(7)2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37508940

ABSTRACT

Traditionally, speech perception training paradigms have not adequately taken into account the possibility that there may be modality-specific requirements for perceptual learning with auditory-only (AO) versus visual-only (VO) speech stimuli. The study reported here investigated the hypothesis that there are modality-specific differences in how prior information is used by normal-hearing participants during vocoded versus VO speech training. Two different experiments, one with vocoded AO speech (Experiment 1) and one with VO, lipread, speech (Experiment 2), investigated the effects of giving different types of prior information to trainees on each trial during training. The training was for four ~20 min sessions, during which participants learned to label novel visual images using novel spoken words. Participants were assigned to different types of prior information during training: Word Group trainees saw a printed version of each training word (e.g., "tethon"), and Consonant Group trainees saw only its consonants (e.g., "t_th_n"). Additional groups received no prior information (i.e., Experiment 1, AO Group; Experiment 2, VO Group) or a spoken version of the stimulus in a different modality from the training stimuli (Experiment 1, Lipread Group; Experiment 2, Vocoder Group). That is, in each experiment, there was a group that received prior information in the modality of the training stimuli from the other experiment. In both experiments, the Word Groups had difficulty retaining the novel words they attempted to learn during training. However, when the training stimuli were vocoded, the Word Group improved their phoneme identification. When the training stimuli were visual speech, the Consonant Group improved their phoneme identification and their open-set sentence lipreading. The results are considered in light of theoretical accounts of perceptual learning in relationship to perceptual modality.

7.
Front Neurosci ; 16: 915744, 2022.
Article in English | MEDLINE | ID: mdl-35942153

ABSTRACT

Spoken language comprehension requires rapid and continuous integration of information, from lower-level acoustic to higher-level linguistic features. Much of this processing occurs in the cerebral cortex. Its neural activity exhibits, for instance, correlates of predictive processing, emerging at delays of a few hundred milliseconds. However, the auditory pathways are also characterized by extensive feedback loops from higher-level cortical areas to lower-level ones as well as to subcortical structures. Early neural activity can therefore be influenced by higher-level cognitive processes, but it remains unclear whether such feedback contributes to linguistic processing. Here, we investigated early speech-evoked neural activity that emerges at the fundamental frequency. We analyzed EEG recordings obtained when subjects listened to a story read by a single speaker. We identified a response tracking the speaker's fundamental frequency that occurred at a delay of 11 ms, while another response elicited by the high-frequency modulation of the envelope of higher harmonics exhibited a larger magnitude and longer latency of about 18 ms, with an additional significant component at around 40 ms. Notably, while the earlier components of the response likely originate from subcortical structures, the latter presumably involves contributions from cortical regions. Subsequently, we determined the magnitude of these early neural responses for each individual word in the story. We then quantified the context-independent frequency of each word and used a language model to compute context-dependent word surprisal and precision. The word surprisal represented how predictable a word is, given the previous context, and the word precision reflected the confidence about predicting the next word from the past context. We found that the word-level neural responses at the fundamental frequency were predominantly influenced by the acoustic features: the average fundamental frequency and its variability. Amongst the linguistic features, only context-independent word frequency showed a weak but significant modulation of the neural response to the high-frequency envelope modulation. Our results show that the early neural response at the fundamental frequency is already influenced by acoustic as well as linguistic information, suggesting top-down modulation of this neural response.
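The context-dependent word surprisal described above is conventionally defined as the negative log probability of a word given its preceding context: a predictable word carries low surprisal, an unexpected one high surprisal. A toy sketch with a hypothetical bigram model (the probabilities are illustrative, not taken from the study, which used a full language model):

```python
import math

# Hypothetical conditional probabilities P(word | previous word)
bigram = {
    ("the", "dog"): 0.2,
    ("the", "cat"): 0.1,
    ("the", "ionosphere"): 0.001,
}

def surprisal(prev, word, model):
    """Surprisal in bits: -log2 P(word | prev). High = unpredictable."""
    return -math.log2(model[(prev, word)])

print(round(surprisal("the", "dog", bigram), 2))         # common continuation, low surprisal
print(round(surprisal("the", "ionosphere", bigram), 2))  # rare continuation, high surprisal
```

The study's word precision (confidence in the prediction) would additionally be derived from the shape of the model's predictive distribution, not just from the probability of the single word that occurred.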

8.
JMIR Serious Games ; 10(3): e32297, 2022 Jul 28.
Article in English | MEDLINE | ID: mdl-35900825

ABSTRACT

BACKGROUND: The number of serious games for cognitive training in aging (SGCTAs) is proliferating in the market and attempting to combat one of the most feared aspects of aging-cognitive decline. However, the efficacy of many SGCTAs is still questionable. Even the measures used to validate SGCTAs are up for debate, with most studies using cognitive measures that gauge improvement in trained tasks, also known as near transfer. This study takes a different approach, testing the efficacy of the SGCTA-Effectivate-in generating tangible far-transfer improvements in a nontrained task-the Eye tracking of Word Identification in Noise Under Memory Increased Load (E-WINDMIL)-which tests speech processing in adverse conditions. OBJECTIVE: This study aimed to validate the use of a real-time measure of speech processing as a gauge of the far-transfer efficacy of an SGCTA designed to train executive functions. METHODS: In a randomized controlled trial that included 40 participants, we tested 20 (50%) older adults before and after self-administering the SGCTA Effectivate training and compared their performance with that of the control group of 20 (50%) older adults. The E-WINDMIL eye-tracking task was administered to all participants by blinded experimenters in 2 sessions separated by 2 to 8 weeks. RESULTS: We tested the change between sessions in the efficiency of segregating the spoken target word from its sound-sharing alternative as the word unfolds in time. We found that training with the SGCTA Effectivate improved both early and late speech processing in adverse conditions, with higher discrimination scores in the training group than in the control group (early processing: F1,38=7.371; P=.01; ηp2=0.162 and late processing: F1,38=9.003; P=.005; ηp2=0.192). CONCLUSIONS: This study found the E-WINDMIL measure of speech processing to be a valid gauge for the far-transfer effects of executive function training. As the SGCTA Effectivate does not train any auditory task or language processing, our results provide preliminary support for the ability of Effectivate to create a generalized cognitive improvement. Given the crucial role of speech processing in healthy and successful aging, we encourage researchers and developers to use speech processing measures, the E-WINDMIL in particular, to gauge the efficacy of SGCTAs. We advocate for increased industry-wide adoption of far-transfer metrics to gauge SGCTAs.
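The effect sizes quoted above follow from the standard conversion of an F statistic to partial eta squared, ηp² = F·df1 / (F·df1 + df2). A quick check against the values in the abstract:

```python
def partial_eta_squared(f, df1, df2):
    """Partial eta squared recovered from an F statistic and its degrees of freedom."""
    return f * df1 / (f * df1 + df2)

# F(1,38) values reported in the abstract
print(round(partial_eta_squared(7.371, 1, 38), 3))  # early processing -> 0.162
print(round(partial_eta_squared(9.003, 1, 38), 3))  # late processing  -> 0.192
```

Both recovered values match the reported ηp² of 0.162 and 0.192, confirming the statistics are internally consistent.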

9.
Neuropsychologia ; 158: 107907, 2021 07 30.
Article in English | MEDLINE | ID: mdl-34058175

ABSTRACT

Language difficulties of children with Developmental Language Disorder (DLD) have been associated with multiple underlying factors and are still poorly understood. One way of investigating the mechanisms of DLD language problems is to compare language-related brain activation patterns of children with DLD to those of a population with similar language difficulties and a uniform etiology. Children with 22q11.2 deletion syndrome (22q11DS) constitute such a population. Here, we conducted an fMRI study in which children (6-10 years old) with DLD and 22q11DS listened to speech alternated with reversed speech. We compared language laterality and language-related brain activation levels with those of typically developing (TD) children who performed the same task. The data revealed no significant differences between groups in language lateralization, but task-related activation levels were lower in children with language impairment than in TD children in several nodes of the language network. We conclude that language impairment in children with DLD and in children with 22q11DS may involve (partially) overlapping cortical areas.


Subjects
DiGeorge Syndrome , Language Development Disorders , Brain/diagnostic imaging , Child , Child Language , DiGeorge Syndrome/complications , DiGeorge Syndrome/diagnostic imaging , Humans , Language Development Disorders/etiology , Speech
10.
Front Artif Intell ; 3: 39, 2020.
Article in English | MEDLINE | ID: mdl-33733156

ABSTRACT

We present an acoustic distance measure for comparing pronunciations, and apply the measure to assess foreign accent strength in American English by comparing speech of non-native American-English speakers to a collection of native American-English speakers. An acoustic-only measure is valuable as it does not require the time-consuming and error-prone process of phonetically transcribing speech samples, which is necessary for current edit distance-based approaches. We minimize speaker variability in the data set by employing speaker-based cepstral mean and variance normalization, and compute word-based acoustic distances using the dynamic time warping algorithm. Our results indicate a strong correlation of r = -0.71 (p < 0.0001) between the acoustic distances and human judgments of native-likeness provided by more than 1,100 native American-English raters. Therefore, the convenient acoustic measure performs only slightly below the state-of-the-art transcription-based performance of r = -0.77. We also report the results of several small experiments which show that the acoustic measure is not only sensitive to segmental differences, but also to intonational and durational differences. However, it is not immune to unwanted differences caused by using a different recording device.
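The dynamic time warping step can be sketched as follows. This is a generic DTW over per-frame feature vectors; the study's MFCC extraction and speaker-based cepstral mean and variance normalization are assumed to have happened upstream, and the one-dimensional "frames" below are toy values:

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of feature vectors.

    Aligns the sequences so that similar frames match even when one
    pronunciation is stretched or compressed in time relative to the other.
    """
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])       # Euclidean frame distance
            cost[i][j] = d + min(cost[i - 1][j],     # skip a frame of a
                                 cost[i][j - 1],     # skip a frame of b
                                 cost[i - 1][j - 1]) # match frames
    return cost[n][m]

# Two toy "pronunciations": same trajectory, different timing
native = [(0.0,), (1.0,), (2.0,), (1.0,)]
accented = [(0.0,), (0.0,), (1.0,), (2.0,), (1.0,)]
print(dtw_distance(native, accented))
```

Because the second sequence only lingers longer on the first frame, the warping path absorbs the timing difference and the distance is zero; genuinely different pronunciations would yield a positive distance.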

11.
Brain Lang ; 184: 32-42, 2018 09.
Article in English | MEDLINE | ID: mdl-29960165

ABSTRACT

Recent work has sought to describe the time-course of spoken word recognition, from initial acoustic cue encoding through lexical activation, and identify cortical areas involved in each stage of analysis. However, existing methods are limited in either temporal or spatial resolution, and as a result, have only provided partial answers to the question of how listeners encode acoustic information in speech. We present data from an experiment using a novel neuroimaging method, fast optical imaging, to directly assess the time-course of speech perception, providing non-invasive measurement of speech sound representations, localized to specific cortical areas. We find that listeners encode speech in terms of continuous acoustic cues at early stages of processing (ca. 96 ms post-stimulus onset), and begin activating phonological category representations rapidly (ca. 144 ms post-stimulus). Moreover, cue-based representations are widespread in the brain and overlap in time with graded category-based representations, suggesting that spoken word recognition involves simultaneous activation of both continuous acoustic cues and phonological categories.


Subjects
Brain/diagnostic imaging , Speech Perception/physiology , Speech/physiology , Adult , Brain/physiology , Cues , Electroencephalography , Female , Humans , Male , Neuroimaging , Optical Imaging , Phonetics , Young Adult
12.
Biling (Camb Engl) ; 18(3): 490-501, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26146479

ABSTRACT

Studies measuring inhibitory control in the visual modality have shown a bilingual advantage in both children and adults. However, there is a lack of developmental research on inhibitory control in the auditory modality. This study compared the comprehension of active and passive English sentences in 7- to 10-year-old bilingual and monolingual children. The task was to identify the agent of a sentence in the presence of verbal interference. The target sentence was cued by the gender of the speaker. Children were instructed to focus on the sentence in the target voice and ignore the distractor sentence. Results indicate that bilinguals are more accurate than monolinguals in comprehending syntactically complex sentences in the presence of linguistic noise. This supports previous findings with adult participants (Filippi, Leech, Thomas, Green & Dick, 2012). We therefore conclude that the bilingual advantage in interference control begins early in life and is maintained throughout development.

13.
Front Psychol ; 3: 190, 2012.
Article in English | MEDLINE | ID: mdl-22715332

ABSTRACT

During spoken language interpretation, listeners rapidly relate the meaning of each individual word to what has been said before. However, spoken words often contain spurious other words, like day in daisy, or dean in sardine. Do listeners also relate the meaning of such unintended, spurious words to the prior context? We used ERPs to look for transient meaning-based N400 effects in sentences that were completely plausible at the level of words intended by the speaker, but contained an embedded word whose meaning clashed with the context. Although carrier words with an initial embedding (day in daisy) did not elicit an embedding-related N400 effect relative to matched control words without embedding, carrier words with a final embedding (dean in sardine) did elicit such an effect. Together with prior work from our lab and the results of a Shortlist B simulation, our findings suggest that listeners do semantically interpret embedded words, albeit not under all conditions. We explain the latter by assuming that the sense-making system adjusts its hypothesis for how to interpret the external input at every new syllable, in line with recent ideas of active sampling in perception.
