Results 1 - 20 of 7,265
1.
J Craniomaxillofac Surg ; 47(12): 1868-1874, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31812310

ABSTRACT

BACKGROUND: Maxillary advancement may affect speech in cleft patients. AIMS: To evaluate whether the amount of maxillary advancement in Le Fort I osteotomy affects velopharyngeal function (VPF) in cleft patients. METHODS: Ninety-three non-syndromic cleft patients (51 females, 42 males) were evaluated retrospectively. All patients had undergone a Le Fort I or bimaxillary (n = 24) osteotomy at the Helsinki Cleft Palate and Craniofacial Center. Preoperative and postoperative lateral cephalometric radiographs were digitized to measure the amount of maxillary advancement. Pre- and postoperative speech was assessed perceptually and instrumentally by experienced speech therapists. Student's t-test and the Mann-Whitney U-test were used in the statistical analyses. Kappa statistics were calculated to assess reliability. RESULTS: The mean advancement of A point was 4.0 mm horizontally (range: -2.8 to 11.3) and 3.9 mm vertically (range: -14.2 to 3.9). Although there was a negative change in VPF, the amount of horizontal or vertical maxillary movement did not significantly influence VPF. There was no difference between the patients with maxillary and bimaxillary osteotomy. CONCLUSIONS: The amount of maxillary advancement does not affect velopharyngeal function in cleft patients.
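The abstract above compares group measurements with Student's t-test and the Mann-Whitney U-test. As a minimal sketch of this kind of two-sample comparison, the following implements Welch's t statistic in plain Python; the advancement values are invented for illustration, not taken from the study:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees of freedom."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se2a, se2b = va / len(a), vb / len(b)
    t = (ma - mb) / math.sqrt(se2a + se2b)
    df = (se2a + se2b) ** 2 / (
        se2a ** 2 / (len(a) - 1) + se2b ** 2 / (len(b) - 1)
    )
    return t, df

# Hypothetical horizontal advancement (mm) for two invented groups
maxillary = [3.1, 4.5, 2.8, 5.0, 3.9]
bimaxillary = [4.2, 3.8, 5.1, 2.9, 4.4]
t, df = welch_t(maxillary, bimaxillary)
```

The resulting t and df would then be compared against the t distribution for a p-value; a real analysis would use a statistics package rather than hand-rolled formulas.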


Subjects
Cleft Lip/surgery, Cleft Palate/surgery, Maxillary Osteotomy/methods, Orthognathic Surgical Procedures/methods, Osteotomy, Le Fort/methods, Speech/physiology, Velopharyngeal Insufficiency/physiopathology, Adolescent, Adult, Cephalometry/methods, Cleft Lip/physiopathology, Cleft Palate/physiopathology, Female, Finland, Humans, Male, Mandibular Advancement/methods, Maxilla/abnormalities, Maxilla/surgery, Middle Aged, Reproducibility of Results, Retrospective Studies, Speech Disorders/physiopathology, Speech Disorders/surgery, Treatment Outcome, Velopharyngeal Insufficiency/surgery, Young Adult
2.
Sheng Li Xue Bao ; 71(6): 935-945, 2019 Dec 25.
Article in Chinese | MEDLINE | ID: mdl-31879748

ABSTRACT

Speech comprehension is a central cognitive function of the human brain. A fundamental question in cognitive neuroscience is how neural activity encodes the acoustic properties of a continuous speech stream while simultaneously resolving multiple levels of linguistic structure. This paper reviews recently developed research paradigms that employ electroencephalography (EEG) or magnetoencephalography (MEG) to capture neural tracking of the acoustic features or linguistic structures of continuous speech. The review focuses on two questions in speech processing: (1) the encoding of continuously changing acoustic properties of speech; (2) the representation of hierarchical linguistic units, including syllables, words, phrases and sentences. Studies have found that low-frequency cortical activity tracks the speech envelope. In addition, cortical activity on different timescales tracks multiple levels of linguistic units, constituting a representation of hierarchically organized linguistic structure. These studies provide new insights into how the human brain processes continuous speech.
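"Neural tracking of the speech envelope," as reviewed above, is often quantified as the correlation between a neural signal and the envelope at some lag. A minimal stdlib sketch with an invented toy signal (real analyses use regularized temporal response functions on actual EEG/MEG data):

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_lag(neural, envelope, max_lag):
    """Lag (in samples) at which the neural signal best tracks the envelope."""
    scores = {}
    for lag in range(max_lag + 1):
        n = len(envelope) - lag
        scores[lag] = pearson(neural[lag:lag + n], envelope[:n])
    return max(scores, key=scores.get), scores

# Toy example: the "neural" trace is the envelope delayed by 3 samples
envelope = [0.0, 0.2, 0.9, 1.0, 0.5, 0.1, 0.0, 0.3, 0.8, 0.6, 0.2, 0.0]
neural = [0.0] * 3 + [v + 0.1 for v in envelope]
lag, scores = best_lag(neural, envelope, max_lag=5)  # recovers lag == 3
```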


Subjects
Electroencephalography, Magnetoencephalography, Speech, Acoustic Stimulation, Humans, Speech/physiology, Speech Perception
3.
BMC Neurol ; 19(1): 241, 2019 Oct 19.
Article in English | MEDLINE | ID: mdl-31629403

ABSTRACT

BACKGROUND: Amyotrophic lateral sclerosis (ALS) is a fatal, rapidly progressing degenerative disease. In 25% of ALS sufferers, speech disorders occur as prodromal symptoms of the disease. Impaired communication affects physical health and has a negative impact on mental and emotional condition. In this study, we assessed which domains of speech are particularly affected in ALS. Subsequently, we estimated possible correlations between ALS patients' subjective perception of their speech quality and an objective assessment of the speech organs carried out by an expert. METHODS: The study group consisted of 63 patients with sporadic ALS. The patients' articulatory functions were examined by means of the Voice Handicap Index (VHI) and the Frenchay Dysarthria Assessment (FDA). RESULTS: On the basis of the VHI scores, the cohort was divided into two groups: group I (40 subjects) with mild speech impairment, and group II (23 subjects) displaying moderate to profound speech deficits. In an early phase of ALS, changes were typically reported in the tongue, lips and soft palate. The FDA- and VHI-based measurements revealed a high, positive correlation between the objective and subjective evaluation of articulation quality. CONCLUSIONS: Deterioration of the articulatory organs resulted in reduced social, physical and emotional functioning. The highly positive correlation between the VHI and FDA scales suggests that the VHI questionnaire may be a reliable, self-contained tool for monitoring the course and progression of speech disorders in ALS. TRIAL REGISTRATION: NCT02193893.


Subjects
Amyotrophic Lateral Sclerosis/complications, Speech Disorders/diagnosis, Speech Disorders/etiology, Speech-Language Pathology/methods, Adult, Aged, Disease Progression, Female, Humans, Male, Middle Aged, Speech/physiology
4.
PLoS Comput Biol ; 15(9): e1007091, 2019 09.
Article in English | MEDLINE | ID: mdl-31525179

ABSTRACT

A fundamental challenge in neuroscience is to understand what structure in the world is represented in spatially distributed patterns of neural activity from multiple single-trial measurements. This is often accomplished by learning a simple, linear transformation between neural features and features of the sensory stimuli or motor task. While successful in some early sensory processing areas, linear mappings are unlikely to be ideal tools for elucidating the nonlinear, hierarchical representations of higher-order brain areas during complex tasks, such as the production of speech by humans. Here, we apply deep networks to predict produced speech syllables from a dataset of high-gamma cortical surface electric potentials recorded from human sensorimotor cortex. We find that deep networks had higher decoding prediction accuracy than baseline models. Having established that deep networks extract more task-relevant information from neural datasets than linear models (i.e., higher predictive accuracy), we next sought to demonstrate their utility as a data analysis tool for neuroscience. We first show that the deep networks' confusions revealed hierarchical latent structure in the neural data, which recapitulated the underlying articulatory nature of speech motor control. We next broadened the frequency features beyond high gamma and identified a novel high-gamma-to-beta coupling during speech production. Finally, we used deep networks to compare task-relevant information in different neural frequency bands and found that the high-gamma band contains the vast majority of the information relevant for the speech prediction task, with little to no additional contribution from lower-frequency amplitudes. Together, these results demonstrate the utility of deep networks as a data analysis tool for basic and applied neuroscience.
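The idea of reading latent structure out of a decoder's confusions can be sketched without any deep-learning machinery: given a confusion matrix over syllables, accuracy and the most-confusable pairs already hint at shared articulatory features. The syllable set and all counts below are invented for illustration (labial ba/pa and dorsal ga/ka pairs are assumed to be confused more often):

```python
# Hypothetical confusion counts: rows = true syllable, cols = predicted
syllables = ["ba", "pa", "ga", "ka"]
confusion = {
    "ba": {"ba": 40, "pa": 7, "ga": 2, "ka": 1},
    "pa": {"ba": 8, "pa": 38, "ga": 1, "ka": 3},
    "ga": {"ba": 1, "pa": 2, "ga": 41, "ka": 6},
    "ka": {"ba": 2, "pa": 1, "ga": 10, "ka": 37},
}

total = sum(sum(row.values()) for row in confusion.values())
correct = sum(confusion[s][s] for s in syllables)
accuracy = correct / total

def confusability(a, b):
    """Symmetrized confusion count: high values suggest shared latent features."""
    return confusion[a][b] + confusion[b][a]

pairs = [(a, b) for i, a in enumerate(syllables) for b in syllables[i + 1:]]
most_confused = max(pairs, key=lambda p: confusability(*p))
```

Clustering syllables by such confusability scores is one simple way to expose the hierarchical (e.g., articulator-based) organization the abstract describes.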


Subjects
Computational Biology/methods, Deep Learning, Sensorimotor Cortex/physiology, Speech/physiology, Electrocorticography, Humans, Signal Processing, Computer-Assisted
5.
PLoS Comput Biol ; 15(9): e1007321, 2019 09.
Article in English | MEDLINE | ID: mdl-31479444

ABSTRACT

We present a new computational model of speech motor control: the Feedback-Aware Control of Tasks in Speech (FACTS) model. FACTS employs a hierarchical state feedback control architecture to control a simulated vocal tract and produce intelligible speech. The model includes higher-level control of speech tasks and lower-level control of speech articulators. The task controller is modeled as a dynamical system governing the creation of desired constrictions in the vocal tract, after Task Dynamics. Both the task and articulatory controllers rely on an internal estimate of the current state of the vocal tract to generate motor commands. This estimate is derived, based on an efference copy of the applied controls, from a forward model that predicts both the next vocal tract state and the expected auditory and somatosensory feedback. A comparison between predicted and actual feedback is then used to update the internal state estimate. FACTS qualitatively replicates many characteristics of the human speech system: the model is robust to noise in both the sensory and motor pathways, is relatively unaffected by a loss of auditory feedback but is more significantly impacted by a loss of somatosensory feedback, and responds appropriately to externally imposed alterations of auditory and somatosensory feedback. The model also replicates previously hypothesized trade-offs between reliance on auditory and somatosensory feedback, and shows for the first time how this relationship may be mediated by acuity in each sensory domain. These results have important implications for our understanding of the speech motor control system in humans.
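The predict-compare-correct loop at the heart of state feedback control can be sketched in one dimension: a forward model predicts the next state from the efference copy of the motor command, and sensory feedback corrects the prediction. This is not the FACTS implementation; all dynamics, gains, and the scalar "vocal tract state" are invented for illustration:

```python
def simulate(steps, gain=0.5, feedback_noise=0.0):
    """Scalar state-feedback control toward a target constriction of 1.0."""
    true_state = 0.0   # actual plant state
    estimate = 0.0     # internal state estimate
    target = 1.0
    for _ in range(steps):
        command = 0.3 * (target - estimate)     # controller acts on the estimate
        true_state += command                   # plant dynamics (trivially known here)
        predicted = estimate + command          # forward model via efference copy
        feedback = true_state + feedback_noise  # sensory feedback
        # correct the prediction by the (predicted vs. actual) feedback mismatch
        estimate = predicted + gain * (feedback - predicted)
    return true_state, estimate

state, est = simulate(50)
```

With noiseless feedback the estimate tracks the plant exactly and both converge on the target; injecting `feedback_noise` or removing the correction term illustrates the robustness trade-offs the abstract discusses.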


Subjects
Models, Biological, Motor Skills/physiology, Speech/physiology, Computational Biology, Feedback, Sensory/physiology, Humans, Sensorimotor Cortex/physiology
6.
Nat Commun ; 10(1): 3958, 2019 09 02.
Article in English | MEDLINE | ID: mdl-31477711

ABSTRACT

Despite well-established anatomical differences between primary and non-primary auditory cortex, the associated representational transformations have remained elusive. Here we show that primary and non-primary auditory cortex are differentiated by their invariance to real-world background noise. We measured fMRI responses to natural sounds presented in isolation and in real-world noise, quantifying invariance as the correlation between the two responses for individual voxels. Non-primary areas were substantially more noise-invariant than primary areas. This primary/non-primary difference occurred for both speech and non-speech sounds and was unaffected by a concurrent demanding visual task, suggesting that the observed invariance is not specific to speech processing and is robust to inattention. The difference was most pronounced for real-world background noise: both primary and non-primary areas were relatively robust to simple types of synthetic noise. Our results suggest a general representational transformation between auditory cortical stages, illustrating a representational consequence of hierarchical organization in the auditory system.
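The invariance measure described above, the correlation between a voxel's responses to the same sounds in quiet and in noise, is straightforward to compute. A minimal sketch with invented response vectors standing in for one "non-primary-like" and one "primary-like" voxel:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length response vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

# Hypothetical responses of one voxel to five sounds, in quiet vs. in noise
quiet = [1.0, 2.0, 3.0, 4.0, 5.0]
noisy_nonprimary = [1.1, 2.0, 3.1, 3.9, 5.0]  # nearly unchanged -> noise-invariant
noisy_primary = [2.5, 1.0, 4.0, 2.0, 3.5]     # scrambled by noise -> low invariance

invariance_nonprimary = pearson(quiet, noisy_nonprimary)
invariance_primary = pearson(quiet, noisy_primary)
```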


Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Perception/physiology, Noise, Speech/physiology, Adult, Auditory Cortex/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging/methods, Male, Sound, Young Adult
7.
Laterality ; 24(6): 707-739, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31399020

ABSTRACT

Several non-verbal perceptual and attentional processes have been linked with specialization of the right cerebral hemisphere. Given that most people have a left-hemispheric specialization for language, it is tempting to assume that functions of these two classes of dominance are related. Unfortunately, such models of complementarity are notoriously hard to test. Here we suggest a method that compares the frequency of a particular perceptual asymmetry with the known frequencies of left-hemispheric language dominance in right-handed and non-right-handed groups. We illustrate this idea using the greyscales and colourscales tasks, chimeric faces, emotional dichotic listening, and a consonant-vowel (CV) dichotic listening task. Results show a substantial "breadth" of leftward bias on the right-hemispheric tasks and rightward bias on verbal dichotic listening. Right-handers and non-right-handers did not differ in the proportions of people who were left-biased for greyscales/colourscales. Support for reduced typical biases in non-right-handers was found for chimeric faces and for CV dichotic listening. Results are discussed in terms of complementary theories of cerebral asymmetries, and how this type of method could be used to create a taxonomy of lateralized functions, each categorized as related to speech and language dominance or not.


Subjects
Dominance, Cerebral/physiology, Functional Laterality/physiology, Perception/physiology, Adult, Attention/physiology, Dichotic Listening Tests, Emotions, Female, Humans, Male, Psychomotor Performance/physiology, Sex Characteristics, Speech/physiology, Visual Perception, Young Adult
8.
Nat Commun ; 10(1): 3636, 2019 08 12.
Article in English | MEDLINE | ID: mdl-31406118

ABSTRACT

Human speech possesses a rich hierarchical structure that allows for meaning to be altered by words spaced far apart in time. Conversely, the sequential structure of nonhuman communication is thought to follow non-hierarchical Markovian dynamics operating over only short distances. Here, we show that human speech and birdsong share a similar sequential structure indicative of both hierarchical and Markovian organization. We analyze the sequential dynamics of song from multiple songbird species and speech from multiple languages by modeling the information content of signals as a function of the sequential distance between vocal elements. Across short sequence-distances, an exponential decay dominates the information in speech and birdsong, consistent with underlying Markovian processes. At longer sequence-distances, the decay in information follows a power law, consistent with underlying hierarchical processes. Thus, the sequential organization of acoustic elements in two learned vocal communication signals (speech and birdsong) shows functionally equivalent dynamics, governed by similar processes.
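The diagnostic used above, exponential decay at short distances versus power-law decay at long distances, rests on a classic trick: an exponential is linear in semilog-y, while a power law is linear in log-log, so comparing goodness of fit in the two spaces identifies the regime. A stdlib sketch on an invented information-decay curve:

```python
import math

def linfit(x, y):
    """Ordinary least squares y = a + b*x; returns (a, b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# Toy "mutual information vs. sequence distance" curve that decays as a power law
distances = list(range(1, 21))
info = [2.0 * d ** -0.8 for d in distances]

# Exponential decay is linear in semilog-y; a power law is linear in log-log
_, _, r2_exp = linfit(distances, [math.log(i) for i in info])
_, _, r2_pow = linfit([math.log(d) for d in distances],
                      [math.log(i) for i in info])
```

Here the log-log fit wins (as it must for power-law data); on real speech or birdsong statistics the comparison would be run separately over short and long distance ranges.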


Subjects
Acoustics, Finches/physiology, Speech/physiology, Vocalization, Animal/physiology, Animals, Humans, Language, Linguistics
9.
Biomed Res Int ; 2019: 1817906, 2019.
Article in English | MEDLINE | ID: mdl-31467870

ABSTRACT

Objective: The current study aimed to investigate the effects of body position on the level and severity of stuttering in young adults with developmental stuttering. Methods: A total of 24 subjects (17 male, 7 female; mean age: 24.9 ± 6.2 years) with developmental stuttering participated. The participants were asked to perform oral reading and spontaneous monologue-speaking tasks in different body postures while their speech was recorded. During the reading and speaking tasks, the Stuttering Severity Instrument was used to quantify the severity of stuttering. The effects of different body postures on stuttering severity, reading task, and speaking task scores were analyzed. Results: Significant differences in stuttering severity, reading task, and speaking task scores were found for different body postures. Post hoc analyses revealed a significant difference in stuttering severity, reading task, and speaking task scores when subjects were sitting on a chair with no arm support compared to lying down (p < 0.05). Similarly, there were significant differences between the two sitting positions (sitting on a chair with no arm support vs. sitting on a chair with arm support; p < 0.05). Conclusions: Body postures or body segment positions that relax and facilitate the muscles of the neck and shoulders may potentially improve speech fluency in young adults with developmental stuttering.


Subjects
Posture/physiology, Speech/physiology, Stuttering/physiopathology, Adolescent, Adult, Child, Female, Humans, Male, Middle Aged, Severity of Illness Index, Speech Production Measurement/methods, Young Adult
10.
Int J Med Inform ; 130: 103938, 2019 10.
Article in English | MEDLINE | ID: mdl-31442847

ABSTRACT

OBJECTIVE: To assess the role of speech recognition (SR) technology in clinicians' documentation workflows by examining use of, experience with, and opinions about this technology. MATERIALS AND METHODS: We distributed a survey in 2016-2017 to 1731 clinician SR users at two large medical centers in Boston, Massachusetts and Aurora, Colorado. The survey asked about demographic and clinical characteristics, SR use and preferences, perceived accuracy, efficiency, and usability of SR, and overall satisfaction. Associations between outcomes (e.g., satisfaction) and factors (e.g., error prevalence) were measured using ordinal logistic regression. RESULTS: Most respondents (65.3%) had used their SR system for under one year. 75.5% of respondents estimated seeing 10 or fewer errors per dictation, but 19.6% estimated that half or more of the errors were clinically significant. Although 29.4% of respondents did not include SR among their preferred documentation methods, 78.8% were satisfied with SR, and 77.2% agreed that SR improves efficiency. Satisfaction was associated positively with efficiency and negatively with error prevalence and editing time. Respondents were interested in further training about using SR effectively but expressed concerns regarding software reliability, editing, and workflow. DISCUSSION: Compared to other documentation methods (e.g., scribes, templates, typing, traditional dictation), SR has emerged as an effective solution, overcoming limitations inherent in other options and potentially improving efficiency while preserving documentation quality. CONCLUSION: While concerns about SR usability and accuracy persist, clinicians expressed positive opinions about its impact on workflow and efficiency. Faster and better approaches are needed for clinical documentation, and SR is likely to play an important role going forward.
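The study measures associations with ordinal logistic regression; a simpler stand-in for the same idea of association strength, not the study's actual method, is an odds ratio from a dichotomized 2×2 cross-tab. All counts below are invented:

```python
import math

# Hypothetical 2x2 cross-tab: rows = error prevalence (low/high),
# columns = satisfied with SR (yes/no). Counts are invented, not survey data.
low_err_sat, low_err_unsat = 420, 80
high_err_sat, high_err_unsat = 180, 120

# Odds of satisfaction at low error prevalence vs. high error prevalence
odds_ratio = (low_err_sat * high_err_unsat) / (low_err_unsat * high_err_sat)

# Wald 95% confidence interval on the log odds ratio
se = math.sqrt(1 / low_err_sat + 1 / low_err_unsat
               + 1 / high_err_sat + 1 / high_err_unsat)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
```

An ordinal model would instead keep the full Likert scale of satisfaction as the outcome; the odds ratio here only illustrates the direction-of-association logic.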


Subjects
Documentation/methods, Electronic Health Records/statistics & numerical data, Electronic Health Records/standards, Health Personnel/statistics & numerical data, Medical Errors/statistics & numerical data, Speech Recognition Software/statistics & numerical data, Speech/physiology, Adult, Aged, Boston, Female, Humans, Male, Middle Aged, Perception, Surveys and Questionnaires, Workflow
11.
IEEE Int Conf Rehabil Robot ; 2019: 689-693, 2019 06.
Article in English | MEDLINE | ID: mdl-31374711

ABSTRACT

For individuals with severe motor deficiencies, controlling external devices such as robotic arms or wheelchairs can be challenging, as many devices require some degree of motor control to be operated, e.g. when controlled using a joystick. A brain-computer interface (BCI) relies only on signals from the brain and may be used as a controller instead of muscles. Motor imagery (MI) has been used in many studies as a control signal for BCIs. However, MI may not be suitable for all control purposes, and several people cannot obtain BCI control with MI. In this study, the aim was to investigate the feasibility of decoding covert speech from single-trial EEG and compare and combine it with MI. In seven healthy subjects, EEG was recorded with twenty-five channels during six different actions: speaking three words (both covert and overt speech), two arm movements (both motor imagery and execution), and one idle class. Temporal and spectral features were derived from the epochs and classified with a random forest classifier. The average classification accuracy was 67 ± 9% and 75 ± 7% for covert and overt speech, respectively; this was 5-10% lower than the movement classification. The performance of the combined movement-speech decoder was 61 ± 9% and 67 ± 7% (covert and overt), but it makes more classes available for control. The possibility of using covert speech for controlling a BCI was outlined; this is a step towards a multimodal BCI system for improved usability.
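The "temporal and spectral features derived from the epochs" step can be sketched for a single channel: temporal features (mean, variance) plus band power from a discrete Fourier transform. The sampling rate, bands, and signal are invented; a real pipeline would use an FFT library and the study's actual feature set before handing the vectors to a classifier:

```python
import math

def epoch_features(epoch, fs, bands=((8, 13), (14, 30))):
    """Temporal (mean, variance) and spectral (band power) features
    for one single-channel epoch sampled at `fs` Hz."""
    n = len(epoch)
    mean = sum(epoch) / n
    var = sum((x - mean) ** 2 for x in epoch) / n
    # Naive DFT power spectrum (fine for a short illustrative epoch)
    power = []
    for k in range(n // 2):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(epoch))
        im = sum(-x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(epoch))
        power.append((re * re + im * im) / n)
    feats = [mean, var]
    for f_lo, f_hi in bands:  # sum power over the frequency bins in each band
        k_lo, k_hi = int(f_lo * n / fs), int(f_hi * n / fs)
        feats.append(sum(power[k_lo:k_hi + 1]))
    return feats

# 1-second epoch at 64 Hz containing a pure 10 Hz (alpha-band) oscillation
fs = 64
epoch = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
features = epoch_features(epoch, fs)  # alpha-band power dominates the beta band
```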


Subjects
Brain-Computer Interfaces, Electroencephalography, Speech/physiology, Feasibility Studies, Female, Humans, Male, Motor Activity/physiology, Movement, Young Adult
12.
Nat Commun ; 10(1): 3096, 2019 07 30.
Article in English | MEDLINE | ID: mdl-31363096

ABSTRACT

Natural communication often occurs in dialogue, differentially engaging auditory and sensorimotor brain regions during listening and speaking. However, previous attempts to decode speech directly from the human brain typically consider listening or speaking tasks in isolation. Here, human participants listened to questions and responded aloud with answers while we used high-density electrocorticography (ECoG) recordings to detect when they heard or said an utterance and to then decode the utterance's identity. Because certain answers were only plausible responses to certain questions, we could dynamically update the prior probabilities of each answer using the decoded question likelihoods as context. We decode produced and perceived utterances with accuracy rates as high as 61% and 76%, respectively (chance is 7% and 20%). Contextual integration of decoded question likelihoods significantly improves answer decoding. These results demonstrate real-time decoding of speech in an interactive, conversational setting, which has important implications for patients who are unable to communicate.
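The context-integration step described above, using decoded question likelihoods to reweight the prior over answers before answer decoding, is a direct application of Bayes' rule. A minimal sketch in which the questions, answers, and every probability are invented for illustration:

```python
# Decoded question likelihoods (invented)
question_likelihood = {"how_are_you": 0.7, "what_color": 0.3}

# P(answer | question): only some answers are plausible for each question
answer_given_question = {
    "how_are_you": {"fine": 0.6, "tired": 0.4, "blue": 0.0, "red": 0.0},
    "what_color": {"fine": 0.0, "tired": 0.0, "blue": 0.5, "red": 0.5},
}

# Context-aware prior over answers: marginalize over the decoded question
answers = ["fine", "tired", "blue", "red"]
prior = {
    a: sum(question_likelihood[q] * answer_given_question[q][a]
           for q in question_likelihood)
    for a in answers
}

# Combine with (invented) answer likelihoods from the neural decoder
answer_likelihood = {"fine": 0.3, "tired": 0.2, "blue": 0.3, "red": 0.2}
unnorm = {a: prior[a] * answer_likelihood[a] for a in answers}
z = sum(unnorm.values())
posterior = {a: p / z for a, p in unnorm.items()}
best = max(posterior, key=posterior.get)
```

Because the decoded question makes "fine"/"tired" far more plausible than the color answers, the context-weighted posterior concentrates on them even when the raw answer likelihoods are nearly flat.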


Subjects
Brain Mapping/methods, Cerebral Cortex/physiology, Speech/physiology, Brain-Computer Interfaces, Electrocorticography/instrumentation, Electrocorticography/methods, Electrodes, Implanted, Epilepsy/diagnosis, Epilepsy/physiopathology, Female, Humans, Time Factors
13.
Article in English | MEDLINE | ID: mdl-31295837

ABSTRACT

Background: This study aimed to describe the oral impact (estimate, severity, frequency) on daily performance (e.g., eating, speaking) and to identify the perceived oral impairment(s) and socio-behavioral factors associated with oral impact (presence or absence) among children aged 9-12 years in Al-Madinah Al-Munawwarah, Saudi Arabia. Methods: A cross-sectional convenience sample of 186 children aged 9-12 years was recruited. Sociodemographic characteristics, oral health-related behaviors, and perceived oral impairments (e.g., caries, toothache) were obtained from participants. The validated Arabic Child Oral Impact on Daily Performance (C-OIDP) inventory was used to assess oral impacts. Sample descriptive statistics and multivariable logistic regressions modeling the association between C-OIDP and explanatory variables were performed. Results: The mean (±SD) age of the children was 10.29 ± 1.24 years; 66.4% were from public schools, and 52% were female. At least one C-OIDP was reported by 78% of the participants. The mean C-OIDP score was 2.27 ± 1.99. Toothache was reported as a perceived impairment for almost all oral impacts and was the strongest predictor of C-OIDP. Low paternal income was negatively associated with C-OIDP (odds ratio (OR) = 0.24, 95% confidence interval (CI) = 0.10-0.62). Females had significantly higher odds of reporting C-OIDP than males. Conclusions: In this convenience sample, a high percentage of children aged 9-12 years reported C-OIDP, which was linked to oral impairment and sociodemographic factors. Further studies are required to explore the clinical, behavioral, and sociodemographic factors in relation to C-OIDP among Saudi children in a representative sample.


Subjects
Attitude to Health, Child Behavior, Eating, Health Behavior, Oral Health, Speech, Activities of Daily Living, Child, Child Behavior/physiology, Child Behavior/psychology, Cross-Sectional Studies, Eating/physiology, Eating/psychology, Female, Health Status Indicators, Humans, Logistic Models, Male, Psychometrics, Quality of Life, Saudi Arabia, Self Report, Socioeconomic Factors, Speech/physiology
14.
Acta Psychol (Amst) ; 199: 102888, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31349029

ABSTRACT

Interference in picture naming occurs from representing a partner's preparations to speak (Gambi, van de Cavey, & Pickering, 2015). We tested the origins of this interference using a simple non-communicative joint naming task based on Gambi et al. (2015), where response latencies indexed interference from partner task and partner speech content, and eye fixations to partner objects indexed overt attention. Experiment 1 contrasted a partner-present condition with a control partner-absent condition to establish the role of the partner in eliciting interference. For latencies, we observed interference from the partner's task and speech content, with interference increasing due to partner task in the partner-present condition. Eye-tracking measures showed that interference in naming was not due to overt attention to partner stimuli but to broad expectations about likely utterances. Experiment 2 examined whether an equivalent non-verbal task also elicited interference, as predicted from a language as joint action framework. We replicated the finding of interference due to partner task and again found no relationship between overt attention and interference. These results support Gambi et al. (2015). Individuals co-represent a partner's task while speaking, and doing so does not require overt attention to partner stimuli.


Subjects
Photic Stimulation/methods, Psychomotor Performance/physiology, Reaction Time/physiology, Semantics, Speech/physiology, Adolescent, Adult, Attention/physiology, Female, Humans, Language, Male, Young Adult
15.
Psychon Bull Rev ; 26(5): 1690-1696, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31290010

ABSTRACT

In interactive models of speech production, wordforms that are related to a target form are co-activated during lexical planning, and co-activated wordforms can leave phonetic traces on the target. This mechanism has been proposed to account for phonetic similarities among morphologically related wordforms. We test this hypothesis in a Javanese verb paradigm. In Javanese, one class of verbs is inflected by nasalizing an initial voiceless obstruent: one form of each word begins with a nasal, while its otherwise identical relative begins with a voiceless obstruent. We predict that if morphologically related forms are co-activated during production, the nasal-initial forms of these words should show phonetic traces of their obstruent-initial forms, as compared to nasal-initial wordforms that do not alternate. Twenty-seven native Javanese speakers produced matched pairs of alternating and non-alternating wordforms. Based on an acoustic analysis of nasal resonance and closure duration, we present good evidence against the original hypothesis: We find that the alternating nasals are phonetically identical to the non-alternating ones on both measures. We argue that interactive effects during lexical planning do not offer the best account for morphologically conditioned phonetic similarities. We discuss an alternative involving competition between phonotactic constraints and word-specific phonological structures.


Subjects
Phonetics, Psycholinguistics, Speech/physiology, Adult, Humans, Indonesia, Speech Acoustics
16.
PLoS One ; 14(5): e0217404, 2019.
Article in English | MEDLINE | ID: mdl-31150442

ABSTRACT

Everyday speech is produced with an intricate timing pattern and rhythm. Speech units follow each other with short interleaving pauses, which can be either bridged by fillers (erm, ah) or empty. Through their syntactic positions, pauses connect to the thoughts expressed. We investigated whether disturbances of thought in schizophrenia are manifest in patterns at this level of linguistic organization, whether these are seen in first degree relatives (FDR) and how specific they are to formal thought disorder (FTD). Spontaneous speech from 15 participants without FTD (SZ-FTD), 15 with FTD (SZ+FTD), 15 FDRs and 15 neurotypical controls (NC) was obtained from a comic strip retelling task and rated for pauses subclassified by syntactic position and duration. SZ-FTD produced significantly more unfilled pauses than NC in utterance-initial positions and before embedded clauses. Unfilled pauses occurring within clausal units did not distinguish any groups. SZ-FTD also differed from SZ+FTD in producing significantly more pauses before embedded clauses. SZ+FTD differed from NC and FDR only in producing longer utterance-initial pauses. FDRs produced significantly fewer fillers than NC. Results reveal that the temporal organization of speech is an important window on disturbances of the thought process and how these relate to language.


Subjects
Schizophrenia/physiopathology, Speech/physiology, Adult, Cognition/physiology, Female, Humans, Language, Male, Middle Aged, Schizophrenic Psychology
17.
Psychon Bull Rev ; 26(5): 1711-1718, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31197755

ABSTRACT

The observation-execution links underlying automatic-imitation processes are suggested to result from associative sensorimotor experience of performing and watching the same actions. Past research supporting the associative sequence learning (ASL) model has demonstrated that sensorimotor training modulates automatic imitation of perceptually transparent manual actions, but ASL has been criticized for not being able to account for opaque actions, such as orofacial movements that include visual speech. To investigate whether the observation-execution links underlying opaque actions are as flexible as has been demonstrated for transparent actions, we tested whether sensorimotor training modulated the automatic imitation of visual speech. Automatic imitation was defined as a facilitation in response times for syllable articulation (ba or da) when in the presence of a compatible visual speech distractor, relative to when in the presence of an incompatible distractor. Participants received either mirror (say /ba/ when the speaker silently says /ba/, and likewise for /da/) or countermirror (say /da/ when the speaker silently says /ba/, and vice versa) training, and automatic imitation was measured before and after training. The automatic-imitation effect was enhanced following mirror training and reduced following countermirror training, suggesting that sensorimotor learning plays a critical role in linking speech perception and production, and that the links between these two systems remain flexible in adulthood. Additionally, as compared to manual movements, automatic imitation of speech was susceptible to mirror training but was relatively resilient to countermirror training. We propose that social factors and the multimodal nature of speech might account for the resilience to countermirror training of sensorimotor associations of speech actions.


Subjects
Attention/physiology, Imitative Behavior/physiology, Learning/physiology, Speech Perception/physiology, Speech/physiology, Visual Perception/physiology, Adult, Female, Humans, Male
18.
Front Neurol Neurosci ; 44: 1-14, 2019.
Article in English | MEDLINE | ID: mdl-31220844

ABSTRACT

Of the main principles of human neuropsychology, the best known may be cerebral specialization: the left and right hemispheres play different roles in language and other higher-order functions. This chapter discusses when and how and by whom the differences were found. It begins with an account of Gall's cortical localization theory, which set the stage. It then describes the discoveries themselves, reviews how the differences were explained, and concludes with a summary of further developments.


Subjects
Brain/physiopathology, Dementia/psychology, Neuropsychology/history, Speech/physiology, Brain/physiology, Emotions/physiology, History, 18th Century, History, 19th Century, Humans
19.
Psychol Aging ; 34(6): 791-804, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31204834

ABSTRACT

Contemporary research on aging has provided mixed evidence for whether older adults are less effective than younger adults at designing and delivering spoken utterances. However, most of these studies have focused on only specific aspects of this process. In addition, they tend to vary significantly in terms of the degree of complexity in their chosen stimuli or task. The present study compares younger and older adults' performance using a referential production paradigm involving simple everyday objects. We varied referential context such that a target object was either unique in its category (e.g., one shirt), or was accompanied by a same-category object (e.g., two shirts). We evaluated whether speakers' descriptions provided listeners with sufficient information for identification, and whether speakers spontaneously adapt their speech for different addressee types (younger adult, older adult, automated dialogue system). A variety of measures were included to provide a comprehensive perspective on adults' performance. Interestingly, the results revealed few or no age differences in measures related to production performance (speech onset latency, speech rate, and fluency). In contrast, consistent differences were observed for measures related to descriptive content, both in terms of informativity and variability in lexical selection: Older adults not only provided more information than necessary for referential success (e.g., superfluous modifiers), but also exhibited greater variability in their selection of modifiers. The results show that, although certain aspects of the production process are well-preserved across the adult lifespan, meaningful age-related differences can still be found in simple referential tasks with everyday objects.


Subjects
Aging/psychology, Communication, Mental Recall/physiology, Speech, Verbal Behavior/physiology, Adolescent, Aged, Aging/physiology, Auditory Perception, Female, Humans, Longevity, Male, Middle Aged, Speech/physiology, Young Adult
20.
Psychiatry Res ; 273: 767-769, 2019 03.
Article in English | MEDLINE | ID: mdl-31207864

ABSTRACT

Evaluating patients' verbal fluency by counting the number of unique words (e.g., animals) produced in a short period (e.g., 1-3 min) is one of the most widely employed cognitive tests in psychiatric research. We introduce new methods to analyze fluency output that leverage modern computational language technology. This enables moving beyond simple word counts to charting the temporal dynamics of speech and objectively quantifying the semantic relationships of the utterances. These metrics can greatly expand the current psychiatric research toolkit and can help refine clinical theories regarding the nature of putative language differences in patients.
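One common way to quantify the semantic relationships of fluency output is cosine similarity between word embeddings of consecutive responses: high-similarity runs suggest clustering within a semantic neighborhood, and a similarity dip marks a switch. The 3-d "embeddings" below are invented stand-ins; a real analysis would use pretrained vectors (e.g., word2vec or GloVe):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u))
                  * math.sqrt(sum(b * b for b in v)))

# Invented toy embeddings for animal words
vectors = {
    "dog":   [0.90, 0.10, 0.00],
    "cat":   [0.80, 0.20, 0.10],
    "wolf":  [0.85, 0.10, 0.05],
    "shark": [0.10, 0.90, 0.20],
}

# Fluency output in production order: consecutive-pair similarity traces the
# semantic trajectory (within-cluster runs vs. switches between clusters)
sequence = ["dog", "cat", "wolf", "shark"]
similarities = [cosine(vectors[a], vectors[b])
                for a, b in zip(sequence, sequence[1:])]
switch_index = min(range(len(similarities)), key=lambda i: similarities[i])
```

With these toy vectors the dip falls at the wolf-to-shark transition, the kind of cluster switch such metrics are meant to detect; pairing each similarity with the inter-word production time would add the temporal-dynamics dimension the abstract mentions.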


Subjects
Language Tests, Neuropsychological Tests, Psychiatry/methods, Speech/physiology, Verbal Behavior/physiology, Adult, Female, Humans, Language, Male, Psychiatry/trends, Semantics