Results 1 - 20 of 23,754
1.
Sensors (Basel) ; 21(19)2021 Sep 27.
Article in English | MEDLINE | ID: mdl-34640780

ABSTRACT

Impaired speech poses a major challenge for Automatic Speech Recognition (ASR) systems because standard approaches are ineffective in the presence of dysarthria. The first aim of this work is to confirm the effectiveness of a new speech analysis technique for speakers with dysarthria. The approach fine-tunes the size and shift of the spectral analysis window used to compute the initial short-time Fourier transform, improving the performance of a speaker-dependent ASR system. The second aim is to determine whether a speaker's voice features correlate with the window and shift parameters that minimise the ASR error for that speaker. For our experiments, we used both impaired and unimpaired Italian speech: 30 speakers with dysarthria from the IDEA database and 10 professional speakers from the CLIPS database, both freely available. The results confirm that when a standard ASR system performs poorly with a speaker with dysarthria, it can be improved by the new speech analysis; conversely, the approach is ineffective for unimpaired and mildly impaired speech. Furthermore, some of a speaker's voice features do correlate with their optimal parameters.
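The windowing idea above is easy to make concrete. Below is a minimal sketch of a per-speaker grid search over STFT window and shift sizes, assuming librosa for the transform; the parameter grid and the `wer_for` scoring callback (standing in for decoding with the speaker-dependent ASR system) are illustrative, not taken from the paper.

```python
# Grid-search STFT window and shift sizes per speaker, keeping the pair
# that minimises ASR word error rate. Illustrative only: wer_for() is a
# stand-in for decoding with a speaker-dependent ASR system.
import numpy as np
import librosa

def stft_features(wav, sr, win_ms, shift_ms):
    n_fft = int(sr * win_ms / 1000)
    hop = int(sr * shift_ms / 1000)
    return np.abs(librosa.stft(wav, n_fft=n_fft, hop_length=hop, win_length=n_fft))

def best_window(wav, sr, wer_for, wins_ms=(15, 20, 25, 30, 35), shifts_ms=(5, 10, 15)):
    best = (None, None, float("inf"))
    for w in wins_ms:
        for s in shifts_ms:
            if s > w:          # a hop longer than the window would skip samples
                continue
            err = wer_for(stft_features(wav, sr, w, s))
            if err < best[2]:
                best = (w, s, err)
    return best  # (window ms, shift ms, lowest WER)
```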


Subjects
Dysarthria , Speech Perception , Humans , Speech , Speech Disorders , Speech Recognition Software
2.
Sensors (Basel) ; 21(19)2021 Sep 29.
Article in English | MEDLINE | ID: mdl-34640824

ABSTRACT

The use of imagined speech with electroencephalographic (EEG) signals is a promising field of brain-computer interfaces (BCI) that seeks to enable communication between language-related areas of the cerebral cortex and devices or machines. However, the complexity of this brain process makes the analysis and classification of these signals a relevant research topic. The goals of this study were: to develop a new Deep Learning (DL) algorithm, referred to as CNNeeg1-1, to recognize EEG signals in imagined vowel tasks; to create an imagined speech database of 50 subjects specialized in imagined vowels of the Spanish language (/a/,/e/,/i/,/o/,/u/); and to compare the performance of CNNeeg1-1 with the Shallow CNN and EEGNet benchmark DL algorithms using an open-access database (BD1) and the newly developed database (BD2). A mixed-design analysis of variance was conducted to assess the intra-subject and inter-subject training of the proposed algorithms. For intra-subject training, CNNeeg1-1 performed best among the three methods in classifying the imagined vowels (/a/,/e/,/i/,/o/,/u/), with an accuracy of 65.62% on the BD1 database and 85.66% on the BD2 database.
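The abstract does not specify the CNNeeg1-1 architecture, so the sketch below is only a generic shallow CNN of the kind used as a baseline here (temporal convolution followed by a spatial convolution across electrodes), in PyTorch; the channel count, window length, and layer sizes are assumptions.

```python
# Illustrative shallow CNN for 5-class imagined-vowel EEG classification.
# The real CNNeeg1-1 architecture is not given in the abstract; the
# electrode count (14) and window length (512 samples) are assumptions.
import torch
import torch.nn as nn

class ImaginedVowelCNN(nn.Module):
    def __init__(self, n_channels=14, n_samples=512, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32)),   # temporal filters
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),           # spatial filters
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(0.5),
        )
        with torch.no_grad():  # infer the flattened feature size
            n_flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(1))
```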


Subjects
Brain-Computer Interfaces , Deep Learning , Algorithms , Electroencephalography , Humans , Speech
3.
Sensors (Basel) ; 21(19)2021 Oct 05.
Article in English | MEDLINE | ID: mdl-34640942

ABSTRACT

Augmented reality via head-mounted displays (HMD-AR) is an emerging technology in education. The interactivity provided by HMD-AR devices is particularly promising for learning, but presents a challenge to human activity recognition, especially with children. Recent technological advances regarding speech and gesture recognition concerning Microsoft's HoloLens 2 may address this prevailing issue. In a within-subjects study with 47 elementary school children (2nd to 6th grade), we examined the usability of the HoloLens 2 using a standardized tutorial on multimodal interaction in AR. The overall system usability was rated "good". However, several behavioral metrics indicated that specific interaction modes differed in their efficiency. The results are of major importance for the development of learning applications in HMD-AR as they partially deviate from previous findings. In particular, the well-functioning recognition of children's voice commands that we observed represents a novelty. Furthermore, we found different interaction preferences in HMD-AR among the children. We also found the use of HMD-AR to have a positive effect on children's activity-related achievement emotions. Overall, our findings can serve as a basis for determining general requirements, possibilities, and limitations of the implementation of educational HMD-AR environments in elementary school classrooms.


Subjects
Augmented Reality , Smart Glasses , Child , Humans , Schools , Speech
4.
Sensors (Basel) ; 21(17)2021 Aug 30.
Article in English | MEDLINE | ID: mdl-34502736

ABSTRACT

Mental health is as crucial as physical health, but it is underappreciated by mainstream biomedical research and the public. Compared to the use of AI or robots in physical healthcare, the use of AI or robots in mental healthcare is much more limited in number and scope. To date, psychological resilience, the ability to cope with a crisis and quickly return to the pre-crisis state, has been identified as an important predictor of psychological well-being but has not been commonly considered by AI systems (e.g., smart wearable devices) or social robots to personalize services such as emotion coaching. To address the dearth of investigations, the present study explores the possibility of estimating personal resilience using physiological and speech signals measured during human-robot conversations. Specifically, the physiological and speech signals of 32 research participants were recorded while the participants answered a humanoid social robot's questions about their positive and negative memories about three periods of their lives. The results from machine learning models showed that heart rate variability and paralinguistic features were the overall best predictors of personal resilience. Such predictability of personal resilience can be leveraged by AI and social robots to improve user understanding and has great potential for various mental healthcare applications in the future.


Subjects
Robotics , Communication , Heart Rate , Humans , Social Interaction , Speech
5.
Sensors (Basel) ; 21(17)2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34502785

ABSTRACT

Speech signals are a primary input source in human-computer interaction (HCI) for applications such as automatic speech recognition (ASR), speech emotion recognition (SER), and gender and age recognition. Classifying speakers by age and gender is a challenging task in speech processing owing to the limitations of current salient high-level feature extraction methods and classification models. To address these problems, we introduce a novel end-to-end age and gender recognition convolutional neural network (CNN) with a specially designed multi-attention module (MAM) that operates on speech signals. The MAM effectively extracts spatial and temporal salient features from the input data. It uses rectangular convolution kernels and comprises two separate attention mechanisms, one over time and one over frequency. The time attention branch learns to detect temporal cues, whereas the frequency attention branch extracts the features most relevant to the target by focusing on spatial frequency features. The two extracted feature sets complement one another and yield high age and gender classification performance. The proposed system was tested on the Common Voice dataset and a locally developed Korean speech recognition dataset. Our model achieved accuracies of 96%, 73%, and 76% for gender, age, and age-gender classification, respectively, on Common Voice, and 97%, 97%, and 90% on the Korean dataset. These results demonstrate the superiority and robustness of the model for age, gender, and age-gender recognition from speech signals.
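A minimal sketch of the two-branch attention idea described above, in PyTorch: rectangular kernels attend separately over time and over frequency, and the two attended maps are combined. The kernel sizes and the additive combination are assumptions; the paper's exact filter shapes are not given in the abstract.

```python
# Two-branch multi-attention module (MAM) over a spectrogram tensor:
# one branch attends over time, the other over frequency.
import torch
import torch.nn as nn

class MultiAttentionModule(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Rectangular kernels: wide along one axis, size 1 along the other.
        self.time_attn = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=(1, 7), padding=(0, 3)),
            nn.Sigmoid(),
        )
        self.freq_attn = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=(7, 1), padding=(3, 0)),
            nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (batch, channels, freq, time)
        temporal = x * self.time_attn(x)   # emphasise salient frames
        spectral = x * self.freq_attn(x)   # emphasise salient bands
        return temporal + spectral         # complementary cues combined
```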


Subjects
Speech , Voice , Emotions , Humans , Language , Neural Networks, Computer
6.
Am J Gastroenterol ; 116(9): 1950-1953, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34465696

ABSTRACT

INTRODUCTION: There are no available low-burden, point-of-care tests to diagnose, grade, and predict hepatic encephalopathy (HE). METHODS: We evaluated speech as a biomarker of HE in 76 English-speaking adults with cirrhosis. RESULTS: Three speech features correlated significantly with neuropsychiatric scores: speech rate, word duration, and use of particles. Patients with low neuropsychiatric scores had slower speech (22 words/min slower, P = 0.01), longer word duration (0.09 seconds/word longer, P = 0.01), and used fewer particles (0.85% fewer, P = 0.01). Patients with a history of overt HE had slower speech (23 words/min slower, P = 0.005) and longer word duration (0.09 seconds/word longer, P = 0.005). DISCUSSION: HE is associated with slower speech.
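The two rate features reported above can be computed directly from a time-aligned transcript. A minimal sketch, assuming word-level timestamps in (word, start_s, end_s) form, e.g. from a forced aligner; the input format is an assumption, not the study's pipeline.

```python
# Compute speech rate (words/min) and mean word duration (s/word)
# from a time-aligned transcript.
def speech_metrics(aligned_words):
    n = len(aligned_words)
    total_min = (aligned_words[-1][2] - aligned_words[0][1]) / 60.0
    rate_wpm = n / total_min                                   # words per minute
    mean_word_s = sum(end - start for _, start, end in aligned_words) / n
    return rate_wpm, mean_word_s

# Toy example with three aligned words:
words = [("the", 0.00, 0.18), ("doctor", 0.25, 0.70), ("said", 0.78, 1.05)]
print(speech_metrics(words))  # (~171 wpm, 0.30 s/word)
```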


Subjects
Hepatic Encephalopathy/complications , Speech Disorders/etiology , Speech , Aged , Female , Humans , Male , Middle Aged
7.
BMJ Open ; 11(9): e046609, 2021 09 06.
Article in English | MEDLINE | ID: mdl-34489271

ABSTRACT

OBJECTIVE: This study aimed to assess the cost-effectiveness of combined scalp acupuncture therapy with speech and language therapy for patients with Broca's aphasia after stroke. DESIGN: A within-trial cost-effectiveness analysis. SETTINGS: Community health centres. SUBJECTS: A total of 203 participants with Broca's aphasia after stroke who had been randomly assigned to receive scalp acupuncture with speech and language therapy (intervention) or speech and language therapy alone (control). INTERVENTION: Both groups underwent speech and language therapy (30 min per day, 5 days a week, for 4 weeks), while the intervention group simultaneously received scalp acupuncture. PRIMARY OUTCOMES: All outcomes were collected at baseline, after the 4-week intervention, and at the 12-week follow-up. Cost-effectiveness measures included the Chinese Rehabilitation Research Center Standard Aphasia Examination (CRRCAE) and Boston Diagnostic Aphasia Examination (BDAE). Cost-utility was evaluated using quality-adjusted life-years (QALYs). Incremental cost-effectiveness ratios were calculated, and sensitivity analyses were conducted. RESULTS: The total cost to deliver the intervention was €4001.72, whereas it was €4323.57 for the control group. The incremental cost-effectiveness ratios showed that the intervention was cost-effective (€495.1 per BDAE grade gained; €1.8 per CRRCAE score gained; €4597.1 per QALY gained) relative to the control over the 12 weeks. The intervention had a 56.4% probability of being cost-effective at the ¥50 696 (€6905.87) Gross Domestic Product (GDP) per capita threshold. Sensitivity analyses confirmed the robustness of the results. CONCLUSIONS: Compared with speech and language therapy alone, the addition of scalp acupuncture was cost-effective in Chinese communities. As the costs of acupuncture services in China are likely to differ from other countries, these results should be carefully interpreted and remain to be confirmed in other populations. TRIAL REGISTRATION NUMBER: ChiCTR-TRC-13003703.
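For reference, the incremental cost-effectiveness ratio (ICER) reported above is the difference in cost between the two arms divided by the difference in effect. A minimal sketch using the arm costs from the abstract; the effect scores are illustrative placeholders, since per-arm effects are not reported there.

```python
# ICER: extra cost of the intervention per extra unit of effect gained.
def icer(cost_new, cost_old, effect_new, effect_old):
    return (cost_new - cost_old) / (effect_new - effect_old)

# Arm costs from the abstract; effect values are illustrative placeholders.
print(icer(4001.72, 4323.57, 1.50, 0.85))
# A negative ICER means the intervention costs less AND gains more,
# i.e., it dominates the comparator.
```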


Subjects
Acupuncture Therapy , Aphasia , Stroke , Aphasia/etiology , Aphasia/therapy , Cost-Benefit Analysis , Humans , Language Therapy , Scalp , Speech , Stroke/complications
8.
BMJ Open ; 11(9): e047083, 2021 09 02.
Article in English | MEDLINE | ID: mdl-34475154

ABSTRACT

INTRODUCTION: Early detection of cognitive impairments is crucial for the successful implementation of preventive strategies. However, in rural isolated areas or so-called 'medical deserts', access to diagnosis and care is very limited. With the current pandemic crisis, now even more than ever, remote solutions such as telemedicine platforms represent great potential and can help to overcome this barrier. Moreover, current advances in voice and image analysis can help overcome the barrier of physical distance by providing additional information on a patient's emotional and cognitive state. Therefore, the aim of this study is to evaluate the feasibility and reliability of a videoconference system for remote cognitive testing empowered by automatic speech and video analysis. METHODS AND ANALYSIS: 60 participants (aged 55 and older) with and without cognitive impairment will be recruited. A complete neuropsychological assessment including a short clinical interview will be administered in two conditions, once via telemedicine and once face-to-face. The order of administration will be counterbalanced so that half the sample starts with the videoconference condition and the other half with the face-to-face condition. Acceptability and user experience will be assessed among participants and clinicians in a qualitative and quantitative manner. Speech and video features will be extracted and analysed to obtain additional information on mood and engagement levels. In a subgroup, measurements of stress indicators such as heart rate and skin conductance will be compared. ETHICS AND DISSEMINATION: The procedures are not invasive and there are no expected risks or burdens to participants. All participants will be informed that this is an observational study, and their consent will be taken prior to the experiment. Demonstrating the effectiveness of such technology would allow its use to be extended across rural areas ('medical deserts') and thus improve the early diagnosis of neurodegenerative pathologies, while providing data crucial for basic research. Results from this study will be published in peer-reviewed journals.


Subjects
Speech , Telemedicine , Aged , Cognition , Feasibility Studies , Humans , Observational Studies as Topic , Reproducibility of Results
9.
Article in Chinese | MEDLINE | ID: mdl-34521164

ABSTRACT

Objective: To investigate the development of auditory speech perception and spatial hearing abilities within one year after cochlear implantation in preschool prelingually deaf children, and the relationship between the two abilities. Methods: This retrospective study analyzed 31 preschool children with an average age of (2.3±1.2) years. All cases were assessed pre-implant and at 6 and 12 months post-implant using the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS), the Meaningful Auditory Integration Scale (MAIS) and the Mandarin Early Speech Perception test (MESP) to evaluate listening and speech perception abilities, and using the Speech, Spatial, and Other Qualities of Hearing Scale for Parents (SSQ-P) questionnaire to evaluate speech perception and spatial hearing abilities. SPSS 23.0 was used for the statistical analysis. Results: All children scored higher on the IT-MAIS/MAIS and MESP at 6 and 12 months post-implant than pre-implant, and scores continued to improve significantly (P<0.01). Mean SSQ-P (Speech) and SSQ-P (Spatial) scores were (0.9±0.2) and (0.8±0.3) points pre-implant, (4.6±0.2) and (2.6±0.3) at 6 months post-implant, and (6.2±0.2) and (6.3±0.3) at 12 months post-implant; both scores differed significantly across the three time points (P<0.01). SSQ-P (Spatial) grew by 675.3% from pre-implant to 12 months post-implant and by 140.6% from 6 to 12 months post-implant, a significantly steeper increase than the IT-MAIS/MAIS, MESP and SSQ-P (Speech). SSQ-P (Speech) and SSQ-P (Spatial) scores were moderately correlated at 12 months post-implant (r=0.465, P=0.008). Conclusions: Within one year after cochlear implantation, the listening, speech perception and spatial hearing abilities of preschool prelingually deaf children show comprehensive, continuous and significant progress as time since implantation increases. Spatial hearing grows faster than speech perception by 12 months post-implant and continues to develop rapidly after 6 months post-implant.


Subjects
Cochlear Implantation , Cochlear Implants , Deafness , Speech Perception , Auditory Perception , Child, Preschool , Deafness/surgery , Hearing , Humans , Infant , Retrospective Studies , Speech
10.
Saudi Med J ; 42(9): 1031-1035, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34470843

ABSTRACT

OBJECTIVES: To validate an Arabic version of the LittlEARS® Early Speech Production Questionnaire (LEESPQ), which assesses early speech and language development in infants aged 0-18 months, in Arabic-speaking children with normal hearing in Saudi Arabia. METHODS: This cross-sectional study was conducted in Riyadh, Saudi Arabia between September and December 2020. Parents completed the LEESPQ regarding their child's speech production development. To characterize speech and language production development in normal-hearing children aged 0-18 months, a norm curve was generated from standardized values calculated from the Arabic normal-hearing data set. RESULTS: A total of 198 questionnaires were analyzed. The total score on the LEESPQ correlated with age, gender, and bilingualism. A norm curve for early speech production in children with normal hearing was created. CONCLUSION: The Arabic version of the LEESPQ appears to be a valid questionnaire for assessing the early language and speech development of Arabic-speaking children with normal hearing aged 0-18 months. It might also be a useful tool to detect developmental delays and hearing disorders in young children.


Subjects
Language Development , Speech , Child , Child, Preschool , Cross-Sectional Studies , Hearing , Humans , Infant , Infant, Newborn , Surveys and Questionnaires
11.
Neuropsychologia ; 161: 108012, 2021 10 15.
Article in English | MEDLINE | ID: mdl-34474065

ABSTRACT

Individuals typically exhibit better cross-sensory perception following unisensory loss, demonstrating improved perception of information available from the remaining senses and increased cross-sensory use of neural resources. Even individuals with no sensory loss will exhibit such changes in cross-sensory processing following temporary sensory deprivation, suggesting that the brain's capacity for recruiting cross-sensory sources to compensate for degraded unisensory input is a general characteristic of the perceptual process. Many studies have investigated how auditory and visual neural structures respond to within- and cross-sensory input. However, little attention has been given to how general auditory and visual neural processing relates to within- and cross-sensory perception. The current investigation examines the extent to which individual differences in general auditory neural processing account for variability in auditory, visual, and audiovisual speech perception in a sample of young healthy adults. Auditory neural processing was assessed using a simple click stimulus. We found that individuals with a smaller P1 peak amplitude in their auditory-evoked potential (AEP) had more difficulty identifying speech sounds in difficult listening conditions, but were better lipreaders. The results suggest that individual differences in the auditory neural processing of healthy adults can account for variability in the perception of information available from the auditory and visual modalities, similar to the cross-sensory perceptual compensation observed in individuals with sensory loss.
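A minimal sketch of extracting a P1 peak amplitude from an averaged auditory-evoked potential, using only NumPy; the 40-90 ms search window is an assumption about where P1 falls for a click stimulus, not the study's stated window.

```python
# Find the P1 peak (a positive deflection) in an averaged AEP waveform.
import numpy as np

def p1_amplitude(aep_uv, sr_hz, t0_s=0.0, window_s=(0.040, 0.090)):
    times = np.arange(len(aep_uv)) / sr_hz + t0_s   # time axis in seconds
    mask = (times >= window_s[0]) & (times <= window_s[1])
    segment = aep_uv[mask]
    i = int(np.argmax(segment))                     # largest positive value
    return segment[i], times[mask][i]               # (amplitude µV, latency s)
```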


Subjects
Lipreading , Speech Perception , Acoustic Stimulation , Adult , Auditory Perception , Humans , Noise , Speech , Visual Perception
12.
Codas ; 33(4): e20200106, 2021.
Article in Portuguese, English | MEDLINE | ID: mdl-34550214

ABSTRACT

PURPOSE: This study investigated the self-perception of 49 women, monolingual speakers of Brazilian Portuguese, about their tongue position for the alveolar articulation of the fricatives [s] and [z]. METHODS: Video recordings of speech samples from these 49 women (ages 18 to 28) were analyzed by three Speech-Language Pathologists. They were classified into two groups: Group 1 (G1, n=25), with no alterations in tongue position during the production of [s] and [z], and Group 2 (G2, n=24), with alterations in tongue position during the production of [s] and [z]. The tongue position self-perception experiment required the participants to identify the specific tongue constriction point in the production of [s] and [z] (apical, laminal, or "other") during the reading of 24 words and 24 pseudowords. The Friedman test, with posterior paired comparisons, was used for the intragroup analysis. The Mann-Whitney test was used for intergroup comparisons. The statistical significance adopted was 5% (p<0.05). RESULTS: G1 reported apical and laminal tongue constrictions, while G2 reported these constrictions plus other tongue adjustments. The presence of other tongue adjustments differentiated the two groups (p=0.002). There were significant differences between [s] and [z] for G1, with the laminal position occurring more often in [s] than in [z]. CONCLUSION: Women with and without alteration in tongue position reported apical and laminal constrictions. However, other tongue adjustments were self-perceived in the presence of altered tongue position.


Subjects
Phonetics , Tongue , Adolescent , Adult , Constriction , Female , Humans , Self Concept , Speech , Speech Production Measurement , Young Adult
13.
Sensors (Basel) ; 21(18)2021 Sep 13.
Article in English | MEDLINE | ID: mdl-34577332

ABSTRACT

Enterprise systems typically produce a large number of logs to record runtime states and important events. Log anomaly detection is useful for business management and system maintenance. Most existing log-based anomaly detection methods use a log parser to get log event indexes or event templates and then utilize machine learning methods to detect anomalies. However, these methods cannot handle unknown log types and do not take advantage of log semantic information. In this article, we propose ConAnomaly, a log-based anomaly detection model composed of a log sequence encoder (log2vec) and a multi-layer Long Short-Term Memory network (LSTM). We designed log2vec based on the Word2vec model: it first vectorizes the words in the log content, then removes uninformative words via part-of-speech tagging, and finally obtains the sequence vector as a weighted average of the remaining word vectors. In this way, ConAnomaly not only captures semantic information in the log but also leverages log sequential relationships. We evaluate our proposed approach on two log datasets. Our experimental results show that ConAnomaly is stable, can deal with unseen log types to a certain extent, and provides better performance than most log-based anomaly detection methods.
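A minimal sketch of the log2vec pipeline as described: train Word2vec on tokenized log lines, filter words by part-of-speech tag, and average the surviving vectors with per-word weights. The kept tag set and the weighting scheme are assumptions; the paper's exact choices are not given in the abstract.

```python
# log2vec sketch: Word2vec + POS filtering + weighted averaging.
# Requires NLTK's tagger model: nltk.download('averaged_perceptron_tagger')
import numpy as np
import nltk
from gensim.models import Word2Vec

KEEP_TAGS = {"NN", "NNS", "NNP", "VB", "VBD", "VBG", "JJ"}  # assumed "valid" words

def log2vec(log_lines, weights):
    tokenized = [line.lower().split() for line in log_lines]
    w2v = Word2Vec(tokenized, vector_size=64, window=5, min_count=1)
    vectors = []
    for tokens in tokenized:
        kept = [t for t, tag in nltk.pos_tag(tokens) if tag in KEEP_TAGS]
        if not kept:                      # fall back if filtering removes all
            kept = tokens
        v = np.average([w2v.wv[t] for t in kept], axis=0,
                       weights=[weights.get(t, 1.0) for t in kept])
        vectors.append(v)
    return np.stack(vectors)  # one vector per log line, ready for the LSTM
```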


Subjects
Machine Learning , Memory, Long-Term , Semantics , Speech
14.
Sensors (Basel) ; 21(18)2021 Sep 18.
Article in English | MEDLINE | ID: mdl-34577464

ABSTRACT

The performance of voice-controlled systems is usually influenced by accented speech. To make these systems more robust, frontend accent recognition (AR) technologies have received increased attention in recent years. As accent is a high-level abstract feature that has a profound relationship with language knowledge, AR is more challenging than other language-agnostic audio classification tasks. In this paper, we use an auxiliary automatic speech recognition (ASR) task to extract language-related phonetic features. Furthermore, we propose a hybrid structure that incorporates the embeddings of both a fixed acoustic model and a trainable acoustic model, making the language-related acoustic feature more robust. We conduct several experiments on the AESRC dataset. The results demonstrate that our approach can obtain an 8.02% relative improvement compared with the Transformer baseline, showing the merits of the proposed method.


Subjects
Phonetics , Speech Perception , Language , Recognition, Psychology , Speech
15.
Dev Psychol ; 57(8): 1195-1209, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34591565

ABSTRACT

Within a language, there is considerable variation in the pronunciations of words owing to social factors like age, gender, nationality, and race. In the present study, we investigate whether toddlers link social and linguistic variation during word learning. In Experiment 1, 24- to 26-month-old toddlers were exposed to two talkers whose front vowels differed systematically. One talker trained them on a word-referent mapping. At test, toddlers saw the trained object and a novel object; they heard a single novel label from both talkers. Toddlers responded differently to the label as a function of talker. The following experiments demonstrate that toddlers generalize specific pronunciations across speakers of the same race (Experiment 2), but not across speakers who are simply an unfamiliar race (Experiment 3). They also generalize pronunciations based on previous affiliative behavior (Experiment 4). When affiliative behavior and race are pitted against each other, toddlers' linguistic interpretations are more influenced by affiliative behavior (Experiment 5). These experiments suggest that toddlers attend to and link social and speech variation in their environment.


Subjects
Speech Perception , Speech , Child, Preschool , Humans , Language , Language Development , Verbal Learning
16.
Brain Inj ; 35(10): 1275-1283, 2021 08 24.
Article in English | MEDLINE | ID: mdl-34499576

ABSTRACT

OBJECTIVE: To establish objective and subjective differences in speech rate and speech muscle function between athletes with and without histories of sports-related concussion (SRC), and to inform a potential motor speech evaluation for SRC. METHODS: Over 1,110 speech samples were obtained from 30 athletes aged 19-22 years who had sustained an SRC within the past 2 years and 30 pair-wise matched control athletes with no history of SRC. Speech rate was measured via average time per syllable, average unvoiced time per syllable, and expert perceptual judgment. Speech muscle function was measured via surface electromyography over the orbicularis oris, masseter, and segmental triangle. Group differences were assessed using MANOVA, bootstrapping, and predictive ROC analyses. RESULTS: Athletes with SRC had slower speech rates during diadochokinetic (DDK) tasks than controls, as evidenced by longer average time per syllable, longer average unvoiced time per syllable, and expert judgment of slowed rate. Rate measures were predictive of concussion history. Further, athletes with SRC required more speech muscle activation than controls to complete DDK tasks. CONCLUSION: There is clear evidence of slowed speech and increased muscle activation during DDK tasks in athletes with SRC histories relative to controls. Future work should examine speech rate in acute concussion.
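A minimal sketch of the predictive ROC analysis mentioned above, scoring how well a single rate measure separates SRC from control athletes with scikit-learn; the six data points are fabricated placeholders purely to show the computation.

```python
# ROC analysis: does average time per syllable predict concussion history?
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

labels = np.array([1, 1, 1, 0, 0, 0])                # 1 = SRC history (toy data)
time_per_syllable = np.array([0.26, 0.24, 0.27, 0.20, 0.21, 0.19])

auc = roc_auc_score(labels, time_per_syllable)       # slower speech -> SRC
fpr, tpr, thresholds = roc_curve(labels, time_per_syllable)
print(f"AUC = {auc:.2f}")                            # 1.0 for this toy split
```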


Subjects
Athletic Injuries , Brain Concussion , Sports , Adult , Athletes , Athletic Injuries/complications , Brain Concussion/complications , Humans , Muscles , Speech , Young Adult
17.
Am J Speech Lang Pathol ; 30(5): 2003-2016, 2021 09 23.
Article in English | MEDLINE | ID: mdl-34516226

ABSTRACT

Purpose The purpose of this survey research is to provide preliminary data regarding speech-language pathologists' (SLPs') perceptions of the role that social justice (SJ) plays in their work. As our professional organizations call us to advocate and communicate with regulatory agencies and legislative bodies to promote quality care for all individuals, this topic has become particularly important at this time. At present, there is a lack of data in peer-reviewed publications within the discipline of communication disorders on SJ and even less regarding the perceptions of SLPs on SJ. Method The survey was sent to American Speech-Language-Hearing Association (ASHA)-certified SLPs, identified by the ASHA ProFind database, across six U.S. geographic regions, including both urban and rural communities. Four themes were explored through the survey: (a) importance of SJ, (b) awareness of SJ, (c) current practices related to SJ, and (d) barriers to SJ implementation. Results The majority of respondents view SJ as important to the profession (91.2%) and value the work of creating equality among groups (96.0%). Many SLPs are actively involved in implementing SJ principles in their own practice by accepting Medicaid (40.7%), engaging in political outreach (55.0%), and providing transdisciplinary educational outreach (77.9%). Identified barriers to incorporating SJ include time (62.7%), resources (65.6%), and finances (70.0%). Conclusions Working for SJ is important to a majority of the respondents, and various efforts are implemented to create equal opportunities for service to clients. Barriers continue to exist that limit the degree to which SLPs can work toward SJ. A list of actions to be considered in order to promote SJ in the field is provided. Supplemental Material https://doi.org/10.23641/asha.16584044.


Subjects
Communication Disorders , Speech-Language Pathology , Humans , Pathologists , Social Justice , Social Perception , Speech , Surveys and Questionnaires , United States
18.
Neuropsychologia ; 161: 108019, 2021 10 15.
Article in English | MEDLINE | ID: mdl-34487737

ABSTRACT

It is currently unclear to what degree language control, which minimizes non-target language interference and increases the probability of selecting target-language words, is similar for sign-speech (bimodal) bilinguals and spoken language (unimodal) bilinguals. To further investigate the nature of language control processes in bimodal bilinguals, we conducted the first event-related potential (ERP) language switching study with hearing American Sign Language (ASL)-English bilinguals. The results showed a pattern that has not been observed in any unimodal language switching study: a switch-related positivity over anterior sites and a switch-related negativity over posterior sites during ASL production in both early and late time windows. No such pattern was found during English production. We interpret these results as evidence that bimodal bilinguals uniquely engage language control at the level of output modalities.


Subjects
Multilingualism , Evoked Potentials , Humans , Language , Sign Language , Speech
19.
Neuropsychologia ; 161: 108023, 2021 10 15.
Article in English | MEDLINE | ID: mdl-34530025

ABSTRACT

A fundamental educational requirement of beginning reading is to learn, access, and rapidly process associations between novel visuospatial symbols and their phonological representations in speech. Children with difficulties in such cross-modal integration are often divided into dyslexia subtypes, based on whether their primary problem is with the written or spoken component of decoding. The present review suggests that starting in infancy, perceptions of audiovisual speech are integrated by mutual oscillatory phase-resetting between sensory cortices, and throughout development visual and auditory experiences are coupled into unified perceptions. Entirely separate subtypes are incompatible with this view. Visual or auditory deficits will invariably affect processing to some degree in both domains. It is suggested that poor auditory/visual integration may be diagnostic for both forms of dyslexia, stemming from an encoding weakness in the early cross-sensory binding of audiovisual speech. The review presents a model of dyslexia as a dysfunction of the large-scale ventral and dorsal attention networks controlling such binding. Excessive glutamatergic neuronal excitability of the attention networks driven by the locus coeruleus-norepinephrine system may interfere with multisensory integration, with deleterious effects on the acquisition of reading by degrading grapheme/phoneme conversion.


Subjects
Dyslexia , Speech Perception , Child , Humans , Locus Coeruleus , Reading , Speech
20.
Neuropsychologia ; 161: 108022, 2021 10 15.
Article in English | MEDLINE | ID: mdl-34530026

ABSTRACT

Deficits in audiovisual speech perception have consistently been detected in patients with Autism Spectrum Disorder (ASD). Especially for patients with a high-functioning subtype of ASD, it remains uncertain whether these deficits and their underlying neural mechanisms persist into adulthood. Research indicates differences in audiovisual speech processing between ASD and healthy controls (HC) in the auditory cortex. The temporal dynamics of these differences still need to be characterized. Thus, in the present study we examined 14 adult subjects with high-functioning ASD and 15 adult HC while they viewed visual (lip movements) and auditory (voice) speech information that was either superimposed by white noise (condition 1) or not (condition 2). Subjects' performance was quantified by measuring stimulus comprehension. In addition, event-related brain potentials (ERPs) were recorded. Results demonstrated worse speech comprehension for ASD subjects compared to HC under noisy conditions. Moreover, ERP analysis revealed significantly higher P2 amplitudes over parietal electrodes for ASD subjects compared to HC.


Subjects
Autism Spectrum Disorder , Autistic Disorder , Speech Perception , Adult , Brain , Humans , Speech