Results 1 - 14 of 14
1.
Am J Speech Lang Pathol; 33(3): 1524-1535, 2024 May.
Article in English | MEDLINE | ID: mdl-38477644

ABSTRACT

PURPOSE: Speech-language pathology programs use simulated learning experiences (SLEs) to teach graduate student clinicians about fidelity to therapeutic interventions, including static skills (clinical actions that are delivered in a prespecified way regardless of the client's behavior) and dynamic skills (contingent responses formulated in response to a client's behavior). The purpose of this study was to explore student learning of static and dynamic skills throughout SLEs and live clinical practice. METHOD: Thirty-three speech-language pathology graduate students participated in this study. Students were first trained to deliver an intervention before having their treatment fidelity measured at three time points: an initial SLE, actual clinical practice, and a final SLE. Treatment fidelity was first summarized using an overall accuracy score and then separated by static and dynamic skills. We hypothesized that (a) overall accuracy would increase from the initial simulation to treatment but remain steady from treatment to the final simulation and that (b) students would acquire dynamic skills more slowly than static skills. RESULTS: In line with our hypotheses, students' overall accuracy improved over time. Although accuracy for static skills was mostly established after the first simulation, dynamic skills remained less accurate, with a slower acquisition timeline. CONCLUSIONS: These results demonstrate that SLEs are efficacious in teaching students the clinical skills needed for actual clinical practice. Furthermore, we show that dynamic skills are more difficult for students to learn and implement than static skills, which suggests the need for greater attention to dynamic skill acquisition during clinical education.
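For illustration only, here is a minimal Python sketch of how a fidelity checklist of this kind might be scored overall and separately for static and dynamic skills; the data layout, skill labels, and example ratings are assumptions, not the study's actual coding scheme.

```python
# Minimal sketch (not the study's scoring code): each session is rated against a
# checklist of intervention steps, each tagged as "static" or "dynamic", and
# fidelity is the percentage of steps delivered correctly.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    skill_type: str   # "static" or "dynamic" (labels assumed for illustration)
    correct: bool     # rater judgment: was the step delivered as specified?

def fidelity(items, skill_type=None):
    """Percent of checklist items delivered correctly, optionally for one skill type."""
    scored = [i for i in items if skill_type is None or i.skill_type == skill_type]
    if not scored:
        return float("nan")
    return 100.0 * sum(i.correct for i in scored) / len(scored)

# Hypothetical ratings from one session at one of the three time points
session = [
    ChecklistItem("static", True), ChecklistItem("static", True),
    ChecklistItem("dynamic", False), ChecklistItem("dynamic", True),
]
print(fidelity(session))             # overall accuracy -> 75.0
print(fidelity(session, "static"))   # static-skill accuracy -> 100.0
print(fidelity(session, "dynamic"))  # dynamic-skill accuracy -> 50.0
```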


Subjects
Clinical Competence; Education, Graduate; Speech-Language Pathology; Humans; Speech-Language Pathology/education; Male; Female; Education, Graduate/methods; Adult; Young Adult; Students, Health Occupations/psychology; Simulation Training/methods; Time Factors
2.
Article in English | MEDLINE | ID: mdl-37624533

ABSTRACT

Clinical education rotations typically involve an initial training phase followed by supervised clinical practice. However, little research has explored the separate contributions of each component to the development of student confidence and treatment fidelity. The dual purpose of this study was to compare the impact of clinical training format (synchronous vs. asynchronous) and education model (traditional vs. collaborative) on student confidence and treatment fidelity. Thirty-six speech-language pathology graduate students completed this two-phase study during a one-term clinical rotation. Phase 1 investigated the impact of training condition (synchronous, asynchronous guided, asynchronous unguided) on student confidence and treatment fidelity. Phase 2 explored the impact of education model (traditional vs. collaborative) on student confidence and treatment fidelity. Treatment fidelity was measured at the conclusion of Phases 1 and 2. Students rated their confidence at six time points throughout the study. Our results indicate that training condition did not differentially impact student confidence or treatment fidelity; however, education model did: students in the collaborative education model reported increased confidence compared to students in the traditional education model. Students in the collaborative education model also trended toward higher treatment fidelity than students in the traditional education model. These results demonstrate that preclinical training can be effective in several different formats provided it covers the discrete skills needed for the clinical rotation. While preliminary, our results further suggest that students may benefit from working with peers during their clinical rotations.

3.
Am J Speech Lang Pathol; 32(4): 1698-1704, 2023 Jul 10.
Article in English | MEDLINE | ID: mdl-37276448

ABSTRACT

PURPOSE: The Wisconsin Card Sorting Test (WCST) is commonly used to measure nonverbal executive functions (EFs) in a variety of clinical populations. However, in some clinical populations (e.g., people with aphasia), deficits may be present in more linguistic (or verbal) domains and less pronounced in nonverbal domains. Thus, when determining possible deficits in these individuals, it is critical to assess both verbal and nonverbal cognitive abilities. The purpose of this study was to create a verbal card sorting task (VCST) to complement the WCST. METHOD: We created the VCST by modifying a computerized version of the WCST, the Berg Card Sorting Task (BCST). We then compared the performance of 35 individuals with mild traumatic brain injury (mTBI) and 33 matched controls on each task. We tested the VCST in individuals with mTBI first because they demonstrate impaired EFs but unimpaired language. We therefore expected the mTBI group to perform similarly on the VCST and BCST, suggesting that the two tasks measure EFs similarly. RESULTS: In line with our hypothesis, the mTBI group had unimpaired inhibition and sustained attention but impaired shifting on each task. Component loadings for both tasks were also similar, and participants' inhibition and shifting scores positively correlated across the two tasks. CONCLUSIONS: Together, these findings suggest that the VCST is a potentially useful tool for measuring verbal EF deficits. Our results also provide important insights into the EF impairments experienced by individuals with mTBI. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.23230475.


Subjects
Brain Concussion; Executive Function; Humans; Executive Function/physiology; Neuropsychological Tests; Cognition/physiology; Language
4.
Front Psychol; 14: 1029773, 2023.
Article in English | MEDLINE | ID: mdl-36777231

ABSTRACT

Of the three subtypes of attention outlined by the attentional subsystems model, alerting (vigilance or arousal needed for task completion) and executive control (the ability to inhibit distracting information while completing a goal) are susceptible to age-related decline, while orienting remains relatively stable. Yet, few studies have investigated strategies that may acutely maintain or promote attention in typically aging older adults. Music listening may be one potential strategy for attentional maintenance, as past research shows that listening to happy music characterized by a fast tempo and major mode increases cognitive task performance, likely by increasing cognitive arousal. The present study sought to investigate whether listening to happy music (fast tempo, major mode) impacts alerting, orienting, and executive control attention in 57 middle-aged and older adults (M = 61.09 years, SD = 7.16). Participants completed the Attention Network Test (ANT) before and after listening to music rated as happy or sad (slow tempo, minor mode), or no music (i.e., silence) for 10 min. Our results demonstrate that happy music increased alerting attention, particularly when relevant and irrelevant information conflicted within a trial. Contrary to what was predicted, sad music modulated executive control performance. Overall, our findings indicate that music written in the major mode with a fast tempo (happy) and in the minor mode with a slow tempo (sad) modulates different aspects of attention in the short term.
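For readers unfamiliar with the ANT, the three attention effects are conventionally derived as reaction-time difference scores between cue and flanker conditions. A minimal Python sketch of that standard scoring follows; the condition labels and example RTs are assumed for illustration and may not match this study's exact pipeline.

```python
import numpy as np

def ant_effects(rt):
    """Standard ANT difference scores from mean correct-trial RTs (ms) per condition.

    `rt` maps condition names (assumed labels) to lists of reaction times.
    Larger alerting/orienting values indicate more benefit from the cue; larger
    executive (conflict) values indicate more interference from incongruent flankers.
    """
    mean = {k: float(np.mean(v)) for k, v in rt.items()}
    return {
        "alerting": mean["no_cue"] - mean["double_cue"],
        "orienting": mean["center_cue"] - mean["spatial_cue"],
        "executive": mean["incongruent"] - mean["congruent"],
    }

# Hypothetical data for one participant
rts = {
    "no_cue": [640, 655, 630], "double_cue": [600, 612, 595],
    "center_cue": [610, 620, 605], "spatial_cue": [565, 580, 572],
    "incongruent": [690, 702, 684], "congruent": [598, 610, 590],
}
print(ant_effects(rts))
```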

5.
J Assoc Res Otolaryngol; 24(1): 67-79, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36471207

ABSTRACT

Auditory stream segregation and informational masking were investigated in brain-lesioned individuals, age-matched controls with no neurological disease, and young college-age students. A psychophysical paradigm known as rhythmic masking release (RMR) was used to examine the ability of participants to identify a change in the rhythmic sequence of 20-ms Gaussian noise bursts presented through headphones and filtered through generalized head-related transfer functions to produce the percept of an externalized auditory image (i.e., a 3D virtual reality sound). The target rhythm was temporally interleaved with a masker sequence comprising similar noise bursts in a manner that resulted in a uniform sequence with no information remaining about the target rhythm when the target and masker were presented from the same location (an impossible task). Spatially separating the target and masker sequences allowed participants to determine if there was a change in the target rhythm midway during its presentation. RMR thresholds were defined as the minimum spatial separation between target and masker sequences that resulted in a 70.7%-correct performance level in a single-interval, 2-alternative forced-choice adaptive tracking procedure. The main findings were (1) significantly higher RMR thresholds for individuals with brain lesions (especially those with damage to parietal areas) and (2) a left-right spatial asymmetry in performance for lesion (but not control) participants. These findings contribute to a better understanding of spatiotemporal relations in informational masking and the neural bases of auditory scene analysis.
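The 70.7%-correct target is what a 2-down/1-up adaptive rule converges to (Levitt, 1971). A minimal Python sketch of such a track over target-masker separation follows; the starting separation, step size, reversal count, and simulated observer are illustrative assumptions, not the study's actual settings.

```python
import random

def rmr_staircase(trial_correct, start_sep=60.0, step=4.0, n_reversals=8):
    """2-down/1-up adaptive track over target-masker separation (degrees).

    Converges on ~70.7% correct (Levitt, 1971). `trial_correct(sep)` should run
    one trial at the given separation and return True/False. The threshold is
    estimated as the mean of the last reversal points.
    """
    sep, correct_in_a_row, direction = start_sep, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if trial_correct(sep):
            correct_in_a_row += 1
            if correct_in_a_row == 2:          # two correct in a row -> make it harder
                correct_in_a_row = 0
                if direction == +1:            # changed direction: record a reversal
                    reversals.append(sep)
                direction = -1
                sep = max(0.0, sep - step)
        else:                                  # one incorrect -> make it easier
            correct_in_a_row = 0
            if direction == -1:
                reversals.append(sep)
            direction = +1
            sep += step
    last = reversals[-6:]
    return sum(last) / len(last)

# Quick simulated usage: an observer who is correct whenever separation > 20 deg,
# and otherwise guesses at chance (50%).
print(round(rmr_staircase(lambda sep: sep > 20 or random.random() < 0.5), 1))
```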


Subjects
Noise; Perceptual Masking; Humans; Aging; Brain; Auditory Threshold
6.
J Cogn Neurosci; 34(8): 1355-1375, 2022 Jul 1.
Article in English | MEDLINE | ID: mdl-35640102

ABSTRACT

The neural basis of language has been studied for centuries, yet the networks critically involved in simply identifying or understanding a spoken word remain elusive. Several functional-anatomical models of critical neural substrates of receptive speech have been proposed, including (1) auditory-related regions in the left mid-posterior superior temporal lobe, (2) motor-related regions in the left frontal lobe (in normal and/or noisy conditions), (3) the left anterior superior temporal lobe, or (4) bilateral mid-posterior superior temporal areas. One difficulty in comparing these models is that they often focus on different aspects of the sound-to-meaning pathway and are supported by different types of stimuli and tasks. Two auditory tasks that are typically used in separate studies, syllable discrimination and word comprehension, often yield different conclusions. We assessed syllable discrimination (words and nonwords) and word comprehension (clear speech and with a noise masker) in 158 individuals with focal brain damage: left (n = 113) or right (n = 19) hemisphere stroke, left (n = 18) or right (n = 8) anterior temporal lobectomy, and 26 neurologically intact controls. Discrimination and comprehension tasks are doubly dissociable both behaviorally and neurologically. In support of a bilateral model, clear speech comprehension was near ceiling in 95% of left stroke cases and right temporal damage impaired syllable discrimination. Lesion-symptom mapping analyses for the syllable discrimination and noisy word comprehension tasks each implicated most of the left superior temporal gyrus. Comprehension but not discrimination tasks also implicated the left posterior middle temporal gyrus, whereas discrimination but not comprehension tasks also implicated more dorsal sensorimotor regions in posterior perisylvian cortex.


Subjects
Speech Perception; Stroke; Brain Mapping; Humans; Magnetic Resonance Imaging; Neuroanatomy; Speech; Stroke/pathology; Temporal Lobe/pathology
7.
Aphasiology; 35(10): 1318-1333, 2021.
Article in English | MEDLINE | ID: mdl-34898801

ABSTRACT

BACKGROUND: Attention deficits frequently accompany language impairments in aphasia. Most research on attention in aphasia focuses on selective attention measured by executive control tasks such as the color-word Stroop or Eriksen flanker. This is despite ample evidence in neurotypical adults indicating the existence of multiple, distinct attention subtypes. Thus, there is a disconnect between the documented attention impairments in persons with aphasia (PWA) and the literature in neurotypical adults indicating that multiple attention components independently modulate an individual's interactions with the world. AIMS: This study aimed to use the well-studied Attention Network Test (ANT) to quantify three subtypes of attention (alerting, orienting, and executive control) in PWA and matched controls. It was hypothesized that significant effects of alerting, orienting, and executive control would be observed in both groups; however, the effects would be reduced in PWA compared to the neurotypical controls. It was additionally expected that alerting, orienting, and executive control would not be correlated with one another in either group. METHODS & PROCEDURES: Twenty-two PWA and 20 age-, gender-, and education-matched controls completed the ANT. Briefly, the ANT consists of a cued-flanker task where the cues provide information about when and where the flanker executive control task will be presented. The combination of cues and flanker targets embedded within the ANT provides measures of alerting, orienting, and executive control. Participants are expected to respond faster and more accurately to the flanker task when cued as to when and where the task will be presented. OUTCOMES & RESULTS: In line with previous work, the control group demonstrated significant effects of alerting, orienting, and executive control. However, in the aphasia group we found significant effects only for orienting and executive control. Between-group differences were only identified within orienting attention: the control group benefitted more from the orienting cue than the aphasia group. Additionally, alerting, orienting, and executive control were not correlated in the control group, yet a relationship between orienting and executive control was observed in the aphasia group. CONCLUSIONS: Overall, our findings demonstrate that attention differs between PWA and controls, and that the ANT may provide a more complete picture of attention in aphasia; this may be particularly important when characterizing the relationship between attention and language in aphasia.

8.
Front Hum Neurosci; 15: 680933, 2021.
Article in English | MEDLINE | ID: mdl-34759804

ABSTRACT

In post-stroke aphasia, language tasks recruit a combination of residual regions within the canonical language network, as well as regions outside of it in the left and right hemispheres. However, there is a lack of consensus as to how the neural resources engaged by language production and comprehension following a left hemisphere stroke differ from one another and from controls. The present meta-analysis used activation likelihood estimates to aggregate across 44 published fMRI and PET studies to characterize the functional reorganization patterns for expressive and receptive language processes in persons with chronic post-stroke aphasia (PWA). Our results in part replicate previous meta-analyses: we find that PWA activate residual regions within the left lateralized language network, regardless of task. Our results extend this work to show differential recruitment of the left and right hemispheres during language production and comprehension in PWA. First, we find that PWA engage left perilesional regions during language comprehension, and that the extent of this activation is likely driven by stimulus type and domain-general cognitive resources needed for task completion. In contrast to comprehension, language production was associated with activation of the right frontal and temporal cortices. Further analyses linked right hemisphere regions involved in motor speech planning for language production with successful naming in PWA, while unsuccessful naming was associated with the engagement of the right inferior frontal gyrus, a region often implicated in domain-general cognitive processes. While the within-group findings indicate that the engagement of the right hemisphere during language tasks in post-stroke aphasia differs for expressive vs. receptive tasks, the overall lack of major between-group differences between PWA and controls implies that PWA rely on similar cognitive-linguistic resources for language as controls. However, more studies are needed that report coordinates for PWA and controls completing the same tasks in order for future meta-analyses to characterize how aphasia affects the neural resources engaged during language, particularly for specific tasks and as a function of behavioral performance.

9.
J Speech Lang Hear Res; 64(8): 3230-3241, 2021 Aug 9.
Article in English | MEDLINE | ID: mdl-34284642

ABSTRACT

Purpose Sentence comprehension deficits are common following a left hemisphere stroke and have primarily been investigated under optimal listening conditions. However, ample work in neurotypical controls indicates that background noise affects sentence comprehension and the cognitive resources it engages. The purpose of this study was to examine how background noise affects sentence comprehension poststroke using both energetic and informational maskers. We further sought to identify whether sentence comprehension-in-noise abilities are related to poststroke cognitive abilities, specifically working memory and/or attentional control. Method Twenty persons with chronic left hemisphere stroke completed a sentence-picture matching task where they listened to sentences presented in three types of maskers: multispeakers, broadband noise, and silence (control condition). Working memory, attentional control, and hearing thresholds were also assessed. Results A repeated-measures analysis of variance identified participants to have the greatest difficulty with the multispeakers condition, followed by broadband noise and then silence. Regression analyses, after controlling for age and hearing ability, identified working memory as a significant predictor of listening engagement (i.e., mean reaction time) in broadband noise and multispeakers, and attentional control as a significant predictor of informational masking effects (computed as a reaction time difference score where broadband noise is subtracted from multispeakers). Conclusions The results from this study indicate that background noise impacts sentence comprehension abilities poststroke and that these difficulties may arise due to deficits in the cognitive resources supporting sentence comprehension and not other factors such as age or hearing. These findings also highlight a relationship between working memory abilities and sentence comprehension in background noise. We further suggest that attentional control abilities contribute to sentence comprehension by supporting the additional demands associated with informational masking. Supplemental Material https://doi.org/10.23641/asha.14984511.
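A minimal Python sketch of the two analysis ideas described above, the informational masking difference score and a regression that controls for age and hearing, is shown below; the variable names and toy data are assumptions, not the study's materials.

```python
import numpy as np

def informational_masking_effect(rt_multispeaker, rt_broadband):
    """RT difference score: mean RT with the multispeaker masker minus broadband noise."""
    return float(np.mean(rt_multispeaker) - np.mean(rt_broadband))

def ols_with_covariates(y, predictor, age, hearing):
    """Ordinary least squares of y on a predictor of interest, controlling for age and hearing.

    Returns fitted coefficients in the order [intercept, age, hearing, predictor].
    A sketch of the control logic only; the study's actual models may differ.
    """
    X = np.column_stack([np.ones_like(y), age, hearing, predictor])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Hypothetical data for five participants
rng = np.random.default_rng(0)
masking = rng.normal(150, 40, 5)     # informational masking difference scores (ms)
attention = rng.normal(0, 1, 5)      # attentional-control composite
age = rng.normal(60, 8, 5)
hearing = rng.normal(25, 5, 5)       # pure-tone average (dB HL)
print(ols_with_covariates(masking, attention, age, hearing))
```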


Subjects
Speech Perception; Stroke; Comprehension; Humans; Memory, Short-Term; Noise; Stroke/complications
10.
Brain Lang; 203: 104756, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32032865

ABSTRACT

Non-canonical sentence comprehension impairments are well-documented in aphasia. Studies of neurotypical controls indicate that prosody can aid comprehension by facilitating attention towards critical pitch inflections and phrase boundaries. However, no studies have examined how prosody may engage specific cognitive and neural resources during non-canonical sentence comprehension in persons with left hemisphere damage. Experiment 1 examines the relationship between comprehension of non-canonical sentences spoken with typical and atypical prosody and several cognitive measures in 25 persons with chronic left hemisphere stroke and 20 matched controls. Experiment 2 explores the neural resources critical for non-canonical sentence comprehension with each prosody type using region-of-interest-based multiple regressions. Lower orienting attention abilities and greater inferior frontal and parietal damage predicted lower comprehension, but only for sentences with typical prosody. Our results suggest that typical sentence prosody may engage attention resources to support non-canonical sentence comprehension, and this relationship may be disrupted following left hemisphere stroke.
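As a rough illustration of the lesion measures that typically feed region-of-interest-based regressions, the sketch below computes proportion damage per ROI from a binary lesion mask; the ROI names and toy masks are hypothetical, not the study's regions or data.

```python
import numpy as np

def roi_damage(lesion_mask, roi_masks):
    """Proportion of each ROI's voxels covered by the lesion.

    `lesion_mask` is a boolean 3D array; `roi_masks` maps (assumed) ROI names to
    boolean arrays in the same space. These proportions would then serve as
    predictors of comprehension in the multiple regressions.
    """
    return {name: float(np.logical_and(lesion_mask, roi).sum() / roi.sum())
            for name, roi in roi_masks.items()}

# Toy example in a 10x10x10 grid with two hypothetical ROIs
shape = (10, 10, 10)
lesion = np.zeros(shape, bool); lesion[:5] = True
rois = {"inferior_frontal": np.zeros(shape, bool), "inferior_parietal": np.zeros(shape, bool)}
rois["inferior_frontal"][3:7] = True
rois["inferior_parietal"][7:] = True
print(roi_damage(lesion, rois))   # -> {'inferior_frontal': 0.5, 'inferior_parietal': 0.0}
```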


Subjects
Aphasia/physiopathology; Comprehension; Phonetics; Speech Perception; Stroke/physiopathology; Adult; Aphasia/diagnostic imaging; Attention; Connectome; Female; Humans; Magnetic Resonance Imaging; Male; Stroke/diagnostic imaging
11.
Audit Percept Cogn; 3(4): 238-251, 2020.
Article in English | MEDLINE | ID: mdl-34671722

ABSTRACT

INTRODUCTION: Auditory attention is a critical foundation for successful language comprehension, yet is rarely studied in individuals with acquired language disorders. METHODS: We used an auditory version of the well-studied Attention Network Test to study alerting, orienting, and executive control in 28 persons with chronic stroke (PWS). We further sought to characterize the neurobiology of each auditory attention measure in our sample using exploratory lesion-symptom mapping analyses. RESULTS: PWS exhibited the expected executive control effect (i.e., decreased accuracy for incongruent compared to congruent trials), but their alerting and orienting attention were disrupted. PWS did not exhibit an alerting effect and they were actually distracted by the auditory spatial orienting cue compared to the control cue. Lesion-symptom mapping indicated that poorer alerting and orienting were associated with damage to the left retrolenticular part of the internal capsule (adjacent to the thalamus) and left posterior middle frontal gyrus (overlapping with the frontal eye fields), respectively. DISCUSSION: The behavioral findings correspond to our previous work investigating alerting and spatial orienting attention in persons with aphasia in the visual modality and suggest that auditory alerting and spatial orienting attention may be impaired in PWS due to stroke lesions damaging multi-modal attention resources.

12.
Neurocase; 25(3-4): 106-117, 2019.
Article in English | MEDLINE | ID: mdl-31241420

ABSTRACT

Both prosody and sentence structure (e.g., canonical versus non-canonical) affect sentence comprehension. However, few previous studies have examined a possible interaction between prosody and sentence structure. In adult controls we found a significant interaction: typical sentence prosody, versus list prosody, facilitated comprehension of only some sentence structures. In seven stroke patients, impaired attentional control was related to impaired comprehension with sentence prosody but not list prosody; impaired working memory was related to impaired comprehension with list prosody, but not sentence prosody. Thus, non-canonical sentence comprehension impairments in stroke patients may be modulated by prosody, based on a patient's cognitive abilities.


Subjects
Comprehension/physiology; Speech Perception/physiology; Stroke/psychology; Adolescent; Adult; Aged; Female; Humans; Male; Memory, Short-Term/physiology; Middle Aged; Reaction Time/physiology; Young Adult
13.
J Cogn Neurosci; 30(2): 234-255, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29064339

ABSTRACT

Broca's area has long been implicated in sentence comprehension. Damage to this region is thought to be the central source of "agrammatic comprehension," in which performance is substantially worse (and near chance) on sentences with noncanonical word orders compared with canonical word order sentences (in English). This claim is supported by functional neuroimaging studies demonstrating greater activation in Broca's area for noncanonical versus canonical sentences. However, functional neuroimaging studies also have frequently implicated the anterior temporal lobe (ATL) in sentence processing more broadly, and recent lesion-symptom mapping studies have implicated the ATL and mid temporal regions in agrammatic comprehension. This study investigates these seemingly conflicting findings in 66 left-hemisphere patients with chronic focal cerebral damage. Patients completed two sentence comprehension measures, sentence-picture matching and plausibility judgments. Patients with damage including Broca's area (but excluding the temporal lobe; n = 11) on average did not exhibit the expected agrammatic comprehension pattern; for example, their performance was >80% on noncanonical sentences in the sentence-picture matching task. Patients with ATL damage (n = 18) also did not exhibit an agrammatic comprehension pattern. Across our entire patient sample, the lesions of patients with agrammatic comprehension patterns in either task had maximal overlap in posterior superior temporal and inferior parietal regions. Using voxel-based lesion-symptom mapping, we find that lower performances on canonical and noncanonical sentences in each task are both associated with damage to a large left superior temporal-inferior parietal network including portions of the ATL, but not Broca's area. Notably, however, response bias in plausibility judgments was significantly associated with damage to inferior frontal cortex, including gray and white matter in Broca's area, suggesting that the contribution of Broca's area to sentence comprehension may be related to task-related cognitive demands.
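For context, voxel-based lesion-symptom mapping compares behavioral scores between patients whose lesions include versus spare each voxel. A minimal Python sketch of that core step follows; it omits the multiple-comparison correction and lesion-volume control that published analyses such as this one would require.

```python
import numpy as np
from scipy import stats

def vlsm_t_map(lesion_masks, scores, min_n=5):
    """Voxelwise t-statistics comparing scores of lesioned vs. spared patients.

    `lesion_masks`: boolean array of shape (n_patients, x, y, z);
    `scores`: 1-D array of behavioral scores, one per patient.
    Voxels lesioned (or spared) in fewer than `min_n` patients are left as NaN.
    A sketch only: real VLSM also needs correction for multiple comparisons
    (e.g., permutation thresholding) and typically covaries lesion volume.
    """
    n = lesion_masks.shape[0]
    vol_shape = lesion_masks.shape[1:]
    flat = lesion_masks.reshape(n, -1)
    t_flat = np.full(flat.shape[1], np.nan)
    for v in range(flat.shape[1]):
        lesioned = flat[:, v]
        if lesioned.sum() < min_n or (~lesioned).sum() < min_n:
            continue
        t, _ = stats.ttest_ind(scores[lesioned], scores[~lesioned], equal_var=False)
        t_flat[v] = t
    return t_flat.reshape(vol_shape)

# Toy usage: 12 patients, a 5x5x5 grid, and hypothetical comprehension scores
rng = np.random.default_rng(1)
masks = rng.random((12, 5, 5, 5)) > 0.5
scores = rng.normal(75, 10, 12)
print(np.nanmax(vlsm_t_map(masks, scores)))
```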


Subjects
Comprehension/physiology; Linguistics; Temporal Lobe/physiology; Adult; Aged; Aged, 80 and over; Female; Humans; Judgment/physiology; Male; Middle Aged; Temporal Lobe/diagnostic imaging; Temporal Lobe/injuries; Temporal Lobe/physiopathology; Visual Perception/physiology
14.
Front Psychol; 6: 1138, 2015.
Article in English | MEDLINE | ID: mdl-26321976

ABSTRACT

The relationship between the neurobiology of speech and music has been investigated for more than a century. There remains no widespread agreement regarding how (or to what extent) music perception utilizes the neural circuitry that is engaged in speech processing, particularly at the cortical level. Prominent models such as Patel's Shared Syntactic Integration Resource Hypothesis (SSIRH) and Koelsch's neurocognitive model of music perception suggest a high degree of overlap, particularly in the frontal lobe, but also perhaps more distinct representations in the temporal lobe with hemispheric asymmetries. The present meta-analysis used activation likelihood estimation (ALE) analyses to identify the brain regions consistently activated for music as compared to speech across the functional neuroimaging (fMRI and PET) literature. Eighty music and 91 speech neuroimaging studies of healthy adult control subjects were analyzed. Peak activations reported in the music and speech studies were divided into four paradigm categories: passive listening, discrimination tasks, error/anomaly detection tasks, and memory-related tasks. We then compared activation likelihood estimates within each category for music vs. speech, and each music condition with passive listening. We found that listening to music and to speech preferentially activates distinct temporo-parietal bilateral cortical networks. We also found music and speech to have shared resources in the left pars opercularis but speech-specific resources in the left pars triangularis. The extent to which music recruited speech-activated frontal resources was modulated by task. While there are certainly limitations to meta-analysis techniques, particularly regarding sensitivity, this work suggests that the extent of shared resources between speech and music may be task-dependent and highlights the need to consider how task effects may be affecting conclusions regarding the neurobiology of speech and music.
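As a rough sketch of the activation likelihood estimation idea, the code below models each reported peak as a 3D Gaussian, combines peaks within a study into a modeled-activation map, and takes a probabilistic union across studies. Sample-size-dependent smoothing and the permutation-based significance testing performed by standard tools (e.g., GingerALE, NiMARE) are omitted, so this is illustrative only.

```python
import numpy as np

def gaussian_blob(shape, center, sigma_vox):
    """3D Gaussian probability blob around one activation peak (voxel coordinates)."""
    zz, yy, xx = np.indices(shape)
    d2 = (zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2
    return np.exp(-d2 / (2.0 * sigma_vox**2))

def ale_map(studies, shape, sigma_vox=2.0):
    """Crude ALE map: per-study modeled-activation maps combined as a probabilistic union.

    `studies` is a list of peak-coordinate lists (already in voxel space; conversion
    from MNI/Talairach is assumed to have happened upstream). Real ALE scales the
    Gaussian width by study sample size and tests the map against a permutation null;
    both steps are omitted in this sketch.
    """
    ale = np.zeros(shape)
    for peaks in studies:
        ma = np.zeros(shape)
        for p in peaks:
            ma = np.maximum(ma, gaussian_blob(shape, p, sigma_vox))  # within-study MA map
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)                         # union across studies
    return ale

# Toy example: two "studies" with a few peaks each, in a small grid
print(ale_map([[(5, 5, 5)], [(5, 6, 5), (2, 2, 2)]], shape=(10, 10, 10)).max())
```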
