Results 1 - 10 of 10
1.
J Speech Lang Hear Res ; 66(4): 1351-1364, 2023 04 12.
Article in English | MEDLINE | ID: mdl-37014997

ABSTRACT

PURPOSE: The purpose of this study was to evaluate whether a short-form computerized adaptive testing (CAT) version of the Philadelphia Naming Test (PNT) provides error profiles and model-based estimates of semantic and phonological processing that agree with the full test. METHOD: Twenty-four persons with aphasia took the PNT-CAT and the full version of the PNT (hereinafter referred to as the "full PNT") at least 2 weeks apart. The PNT-CAT proceeded in two stages: (a) the PNT-CAT30, in which 30 items were selected to match the evolving ability estimate with the goal of producing a 50% error rate, and (b) the PNT-CAT60, in which an additional 30 items were selected to produce a 75% error rate. Agreement was evaluated in terms of the root-mean-square deviation of the response-type proportions and, for individual response types, in terms of agreement coefficients and bias. We also evaluated agreement and bias for estimates of semantic and phonological processing derived from the semantic-phonological interactive two-step model (SP model) of word production. RESULTS: The results suggested that agreement was poorest for semantic, formal, mixed, and unrelated errors, all of which were underestimated by the short forms. Better agreement was observed for correct and nonword responses. SP model weights estimated by the short forms demonstrated no substantial bias but generally inadequate agreement with the full PNT, which itself showed acceptable test-retest reliability for SP model weights and all response types except for formal errors. DISCUSSION: Results suggest that the PNT-CAT30 and the PNT-CAT60 are generally inadequate for generating naming error profiles or model-derived estimates of semantic and phonological processing ability. 
Post hoc analyses suggested that increasing the number of stimuli available in the CAT item bank may improve the utility of adaptive short forms for generating error profiles, but the underlying theory also suggests that there are limitations to this approach based on a unidimensional measurement model. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.22320814.
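The adaptive item-selection step described above, choosing each next item to match the evolving ability estimate while targeting a 50% or 75% error rate, can be illustrated with a minimal one-parameter (Rasch) sketch. This is a hypothetical illustration, not the PNT-CAT implementation; the item bank, difficulties, and function names are invented.

```python
import math

def next_item(theta, difficulties, administered, target_p_correct=0.5):
    """Pick the unadministered item whose 1PL difficulty brings the expected
    probability of a correct response closest to the target.  A 0.5 target is
    the maximum-information choice; 0.25 targets a ~75% error rate."""
    # Under the 1PL model P(correct) = 1 / (1 + exp(b - theta)), the
    # difficulty yielding exactly target_p_correct is b = theta - logit(p).
    target_b = theta - math.log(target_p_correct / (1 - target_p_correct))
    candidates = [i for i in range(len(difficulties)) if i not in administered]
    return min(candidates, key=lambda i: abs(difficulties[i] - target_b))

bank = [-2.0, -1.0, -0.5, 0.0, 0.4, 1.0, 1.6, 2.2]  # hypothetical logit difficulties
theta = 0.5                                          # current ability estimate
first = next_item(theta, bank, set())                # targets ~50% errors
harder = next_item(theta, bank, {first}, target_p_correct=0.25)  # ~75% errors
```

In a real CAT, theta would be re-estimated after every response and the selection loop would repeat until a stopping rule (fixed length or target precision) is met.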


Subject(s)
Aphasia; Humans; Aphasia/diagnosis; Linguistics; Reproducibility of Results; Semantics
2.
J Speech Lang Hear Res ; 66(6): 1908-1927, 2023 06 20.
Article in English | MEDLINE | ID: mdl-36542852

ABSTRACT

PURPOSE: Small-N studies are the dominant study design supporting evidence-based interventions in communication science and disorders, including treatments for aphasia and related disorders. However, there is little guidance for conducting reproducible analyses or selecting appropriate effect sizes in small-N studies, which has implications for scientific review, rigor, and replication. This tutorial aims to (a) demonstrate how to conduct reproducible analyses using effect sizes common to research in aphasia and related disorders and (b) provide a conceptual discussion to improve the reader's understanding of these effect sizes. METHOD: We provide a tutorial on reproducible analyses of small-N designs in the statistical programming language R using published data from Wambaugh et al. (2017). In addition, we discuss the strengths, weaknesses, reporting requirements, and impact of experimental design decisions on effect sizes common to this body of research. RESULTS: Reproducible code demonstrates implementation and comparison of within-case standardized mean difference, proportion of maximal gain, tau-U, and frequentist and Bayesian mixed-effects models. Data, code, and an interactive web application are available as a resource for researchers, clinicians, and students. CONCLUSIONS: Pursuing reproducible research is key to promoting transparency in small-N treatment research. Researchers and clinicians must understand the properties of common effect size measures in order to select appropriate measures and act as informed consumers of small-N studies. Together, a commitment to reproducibility and a keen understanding of effect sizes can improve the scientific rigor and synthesis of the evidence supporting clinical services in aphasiology and in communication sciences and disorders more broadly. Supplemental Material and Open Science Form: https://doi.org/10.23641/asha.21699476.
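Two of the effect sizes named above, the within-case standardized mean difference (SMD) and the proportion of maximal gain (PMG), are simple enough to sketch directly. The tutorial itself works in R; the following is a hypothetical Python sketch with invented probe data, and the SMD shown is one common variant (gain scaled by the baseline standard deviation).

```python
from statistics import mean, stdev

def smd(baseline, post):
    """Within-case standardized mean difference: post-treatment gain
    expressed in baseline standard-deviation units."""
    return (mean(post) - mean(baseline)) / stdev(baseline)

def pmg(baseline, post, max_score):
    """Proportion of maximal gain: observed gain as a share of the gain
    that was still possible given the baseline level."""
    b = mean(baseline)
    return (mean(post) - b) / (max_score - b)

baseline = [2, 3, 2, 3]      # hypothetical naming probes (out of 20 items)
post = [12, 14, 13, 15]
d = smd(baseline, post)
g = pmg(baseline, post, max_score=20)
```

Note that a near-zero baseline SD inflates this SMD, as in the invented data here, which is one reason to compare several effect size measures as the tutorial does.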


Subject(s)
Aphasia; Humans; Reproducibility of Results; Bayes Theorem; Aphasia/therapy; Communication; Students
3.
Am J Speech Lang Pathol ; 31(5S): 2366-2377, 2022 10 25.
Article in English | MEDLINE | ID: mdl-35290089

ABSTRACT

PURPOSE: Specifying the active ingredients in aphasia interventions can inform treatment theory and improve clinical implementation. This secondary analysis examined three practice-related predictors of treatment response in semantic feature verification (SFV) treatment. We hypothesized that (a) successful feature verification practice would be associated with naming outcomes if SFV operates similarly to standard feature generation semantic feature analysis and (b) successful retrieval practice would be associated with naming outcomes for treated, but not semantically related, untreated words if SFV operates via a retrieval practice-oriented lexical activation mechanism. METHOD: Item-level data from nine participants with poststroke aphasia who received SFV treatment reported in the work of Evans, Cavanaugh, Quique, et al. (2021) were analyzed using Bayesian generalized linear mixed-effects models. Models evaluated whether performance on three treatment components (facilitated retrieval, feature verification, and effortful retrieval) moderated treatment response for treated and semantically related, untreated words. RESULTS: There was no evidence for or against a relationship between successful feature verification practice and treatment response. In contrast, there was a robust relationship between the two retrieval practice components and treatment response for treated words only. DISCUSSION: Findings were consistent with the second hypothesis: Retrieval practice, but not feature verification practice, appears to be a practice-related predictor of treatment response in SFV. However, treatment components are likely interdependent, and feature verification may still be an active ingredient in SFV. Further research is needed to evaluate the causal role of treatment components on treatment outcomes in aphasia.


Subject(s)
Aphasia; Humans; Bayes Theorem; Aphasia/diagnosis; Aphasia/etiology; Aphasia/therapy; Semantics; Treatment Outcome
4.
J Speech Lang Hear Res ; 64(11): 4308-4328, 2021 11 08.
Article in English | MEDLINE | ID: mdl-34694908

ABSTRACT

Purpose: This meta-analysis synthesizes published studies using "treatment of underlying forms" (TUF) for sentence-level deficits in people with aphasia (PWA). The study aims were to examine group-level evidence for TUF efficacy, to characterize the effects of treatment-related variables (sentence structural family and complexity; treatment dose) in relation to the Complexity Account of Treatment Efficacy (CATE) hypothesis, and to examine the effects of person-level variables (aphasia severity, sentence comprehension impairment, and time postonset of aphasia) on TUF response. Method: Data from 13 single-subject, multiple-baseline TUF studies, including 46 PWA, were analyzed. Bayesian generalized linear mixed-effects interrupted time series models were used to assess the effect of treatment-related variables on probe accuracy during baseline and treatment. The moderating influence of person-level variables on TUF response was also investigated. Results: The results provide group-level evidence for TUF efficacy, demonstrating increased probe accuracy during treatment compared with baseline phases. Greater amounts of TUF were associated with larger increases in accuracy, with greater gains for treated than untreated sentences. The findings revealed generalization effects for sentences that were of the same family but less complex than treated sentences. Aphasia severity may moderate TUF response, with people with milder aphasia demonstrating greater gains compared with people with more severe aphasia. Sentence comprehension performance did not moderate TUF response. Greater time postonset of aphasia was associated with smaller improvements for treated sentences but not for untreated sentences. Conclusions: Our results provide generalizable group-level evidence of TUF efficacy. Treatment and generalization responses were consistent with the CATE hypothesis. Model results also identified person-level moderators of TUF (aphasia severity, time postonset of aphasia) and preliminary estimates of the effects of varying amounts of TUF for treated and untreated sentences. Taken together, these findings add to the TUF evidence and may guide future TUF treatment-candidate selection. Supplemental Material: https://doi.org/10.23641/asha.16828630.
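The Bayesian mixed-effects interrupted time series models above are beyond a short sketch, but their core design, a level shift plus a slope change at treatment onset on the logit scale, can be illustrated with ordinary maximum-likelihood logistic regression. Everything here (data, coefficients, function names) is hypothetical and far simpler than the models actually used, which add random effects and Bayesian estimation.

```python
import numpy as np

def logistic_its_design(sessions, treat_start):
    """Design matrix for a logistic interrupted time series: intercept,
    baseline time trend, level shift at treatment onset, and slope change
    for sessions since treatment began."""
    t = np.asarray(sessions, dtype=float)
    phase = (t >= treat_start).astype(float)
    since = np.clip(t - treat_start, 0, None)   # 0 before onset, then 1, 2, ...
    return np.column_stack([np.ones_like(t), t, phase, since])

def fit_logistic(X, y, iters=50):
    """Plain Newton-Raphson maximum-likelihood logistic regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ beta))
        H = X.T @ (X * (p * (1 - p))[:, None]) + 1e-8 * np.eye(X.shape[1])
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

# Hypothetical probes: 15 items per session, treatment begins at session 10.
rng = np.random.default_rng(1)
sessions = np.repeat(np.arange(20), 15)
X = logistic_its_design(sessions, treat_start=10)
true_beta = np.array([-1.5, 0.0, 0.8, 0.3])  # flat baseline, then a jump and rise
y = rng.binomial(1, 1 / (1 + np.exp(-X @ true_beta)))
beta_hat = fit_logistic(X, y)
```

The phase and slope-change columns are what separate treatment response from the baseline trend, which is the comparison the meta-analysis draws between baseline and treatment phases.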


Subject(s)
Aphasia; Aphasia/therapy; Bayes Theorem; Comprehension; Humans; Language; Language Tests
5.
J Speech Lang Hear Res ; 64(8): 3100-3126, 2021 08 09.
Article in English | MEDLINE | ID: mdl-34255979

ABSTRACT

Purpose: The purpose of this study was to develop and pilot a novel treatment framework called BEARS (Balancing Effort, Accuracy, and Response Speed). People with aphasia (PWA) have been shown to maladaptively balance speed and accuracy during language tasks. BEARS is designed to train PWA to balance speed-accuracy trade-offs and improve system calibration (i.e., to adaptively match system use with its current capability), which was hypothesized to improve treatment outcomes by maximizing retrieval practice and minimizing error learning. In this study, BEARS was applied in the context of a semantically oriented anomia treatment based on semantic feature verification (SFV). Method: Nine PWA received 25 hr of treatment in a multiple-baseline single-case series design. BEARS + SFV combined computer-based SFV with clinician-provided BEARS metacognitive training. Naming probe accuracy, efficiency, and proportion of "pass" responses on inaccurate trials were analyzed using Bayesian generalized linear mixed-effects models. Generalization to discourse and correlations between practice efficiency and treatment outcomes were also assessed. Results: Participants improved on naming probe accuracy and efficiency of treated and untreated items, although untreated item gains could not be distinguished from the effects of repeated exposure. There were no improvements on discourse performance, but participants demonstrated improved system calibration based on their performance on inaccurate treatment trials, with an increasing proportion of "pass" responses compared to paraphasia or timeout nonresponses. In addition, levels of practice efficiency during treatment were positively correlated with treatment outcomes, suggesting that improved practice efficiency promoted greater treatment generalization and improved naming efficiency. Conclusions: BEARS is a promising, theoretically motivated treatment framework for addressing the interplay between effort, accuracy, and processing speed in aphasia. This study establishes the feasibility of BEARS + SFV and provides preliminary evidence for its efficacy. It also highlights the importance of considering processing efficiency in anomia treatment, in addition to performance accuracy. Supplemental Material: https://doi.org/10.23641/asha.14935812.


Subject(s)
Ursidae; Animals; Anomia/therapy; Bayes Theorem; Humans; Language Therapy; Reaction Time; Semantics; Treatment Outcome
6.
Am J Speech Lang Pathol ; 30(1S): 344-358, 2021 02 11.
Article in English | MEDLINE | ID: mdl-32571091

ABSTRACT

Purpose: Semantic feature analysis (SFA) is a naming treatment found to improve naming performance for both treated and semantically related untreated words in aphasia. A crucial treatment component is the requirement that patients generate semantic features of treated items. This article examined the role feature generation plays in treatment response to SFA in several ways: It attempted to replicate preliminary findings from Gravier et al. (2018), which found that feature generation predicted treatment-related gains for both trained and untrained words, and it examined whether feature diversity or the number of features generated in specific categories differentially affected SFA treatment outcomes. Method: SFA was administered to 44 participants with chronic aphasia daily for 4 weeks. Treatment was administered to multiple lists sequentially in a multiple-baseline design. Participant-generated features were captured during treatment and coded in terms of feature category, total average number of features generated per trial, and total number of unique features generated per item. Item-level naming accuracy was analyzed using logistic mixed-effects regression models. Results: Producing more participant-generated features was found to improve treatment response for trained but not untrained items in SFA, in contrast to Gravier et al. (2018). There was no effect of participant-generated feature diversity or any differential effect of feature category on SFA treatment outcomes. Conclusions: Patient-generated features remain a key predictor of direct training effects and overall treatment response in SFA. Aphasia severity was also a significant predictor of treatment outcomes. Future work should focus on identifying potential nonresponders to therapy and explore treatment modifications to improve treatment outcomes for these individuals. Supplemental Material: https://doi.org/10.23641/asha.12462596.


Subject(s)
Aphasia; Semantics; Aphasia/diagnosis; Aphasia/therapy; Generalization, Psychological; Humans; Language Therapy; Treatment Outcome
7.
J Speech Lang Hear Res ; 63(2): 599-614, 2020 02 26.
Article in English | MEDLINE | ID: mdl-32073336

ABSTRACT

Purpose: Aphasia is a language disorder caused by acquired brain injury, which generally involves difficulty naming objects. Naming ability is assessed by measuring picture naming, and models of naming performance have mostly focused on accuracy and excluded valuable response time (RT) information. Previous approaches have therefore ignored the issue of processing efficiency, defined here in terms of optimal RT cutoff, that is, the shortest deadline at which individual people with aphasia produce their best possible naming accuracy performance. The goals of this study were therefore to (a) develop a novel model of aphasia picture naming that could accurately account for RT distributions across response types; (b) use this model to estimate the optimal RT cutoff for individual people with aphasia; and (c) explore the relationships between optimal RT cutoff, accuracy, naming ability, and aphasia severity. Method: A total of 4,021 naming trials across 10 people with aphasia were scored for accuracy and RT onset. Data were fit using a novel ex-Gaussian multinomial RT model, which was then used to characterize individual optimal RT cutoffs. Results: Overall, the model fitted the empirical data well and provided reliable individual estimates of optimal RT cutoff in picture naming. Optimal cutoffs ranged between approximately 5 and 10 s, which has important implications for assessment and treatment. There was no direct relationship between aphasia severity, naming RT, and optimal RT cutoff. Conclusion: The multinomial ex-Gaussian modeling approach appears to be a promising and straightforward way to estimate optimal RT cutoffs in picture naming in aphasia. Limitations and future directions are discussed.
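The ex-Gaussian ingredient of the model above (a Gaussian RT component plus an exponential tail) can be sketched with a simple method-of-moments fit. This is a toy illustration with invented parameter values, not the authors' multinomial model, which additionally models response types jointly.

```python
import math
import random

def exgauss_sample(mu, sigma, tau, n, rng):
    """Draw n ex-Gaussian RTs: Normal(mu, sigma) plus Exponential(mean tau)."""
    return [rng.gauss(mu, sigma) + rng.expovariate(1 / tau) for _ in range(n)]

def exgauss_moments(xs):
    """Method-of-moments estimates (mu, sigma, tau), using
    E[X] = mu + tau, Var[X] = sigma^2 + tau^2, and third central
    moment m3 = 2 * tau^3."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    tau = max(m3 / 2, 1e-12) ** (1 / 3)
    sigma = math.sqrt(max(var - tau ** 2, 1e-12))  # floored to avoid negatives
    return m - tau, sigma, tau

rng = random.Random(7)
rts = exgauss_sample(mu=0.9, sigma=0.2, tau=1.2, n=20000, rng=rng)
mu_hat, sigma_hat, tau_hat = exgauss_moments(rts)
```

A heavy exponential tail (large tau) is one intuition behind an explicit RT deadline: accuracy gains from waiting longer eventually become negligible, consistent with the optimal cutoffs of roughly 5-10 s reported above.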


Subject(s)
Aphasia/psychology; Language Tests/standards; Models, Statistical; Reaction Time; Aged; Anomia/psychology; Female; Humans; Male; Middle Aged; Normal Distribution; Reference Standards
8.
J Speech Lang Hear Res ; 63(1): 163-172, 2020 01 22.
Article in English | MEDLINE | ID: mdl-31851861

ABSTRACT

Purpose: The purpose of this study was to verify the equivalence of 2 alternate test forms with nonoverlapping content generated by an item response theory (IRT)-based computer-adaptive test (CAT). The Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) was utilized as an item bank in a prospective, independent sample of persons with aphasia. Method: Two alternate CAT short forms of the PNT were administered to a sample of 25 persons with aphasia who were at least 6 months postonset and received no treatment for 2 weeks before or during the study. The 1st session included administration of a 30-item PNT-CAT, and the 2nd session, conducted approximately 2 weeks later, included a variable-length PNT-CAT that excluded items administered in the 1st session and terminated when the modeled precision of the ability estimate was equal to or greater than the value obtained in the 1st session. The ability estimates were analyzed in a Bayesian framework. Results: The 2 test versions correlated highly (r = .89) and obtained means and standard deviations that were not credibly different from one another. The correlation and error variance between the 2 test versions were well predicted by the IRT measurement model. Discussion: The results suggest that IRT-based CAT alternate forms may be productively used in the assessment of anomia. IRT methods offer advantages for the efficient and sensitive measurement of change over time. Future work should consider the potential impact of differential item functioning due to person factors and intervention-specific effects, as well as expanding the item bank to maximize the clinical utility of the test. Supplemental Material: https://doi.org/10.23641/asha.11368040.
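A variable-length CAT of the kind described, terminating when the ability estimate reaches a target precision, rests on two quantities: the maximum-likelihood ability estimate and its standard error from the test information. A minimal 1PL (Rasch) sketch, with invented difficulties and responses:

```python
import math

def estimate_ability(difficulties, responses, iters=30):
    """Newton-Raphson maximum-likelihood 1PL (Rasch) ability estimate,
    returned with its asymptotic standard error 1/sqrt(information).
    Assumes a mixed response pattern (ML diverges on all-0 or all-1)."""
    theta = 0.0
    for _ in range(iters):
        grad = info = 0.0
        for b, x in zip(difficulties, responses):
            p = 1 / (1 + math.exp(b - theta))  # P(correct | theta, b)
            grad += x - p                      # score function
            info += p * (1 - p)                # Fisher information
        theta += grad / info
    return theta, 1 / math.sqrt(info)

theta, se = estimate_ability([-1.0, -0.5, 0.0, 0.5, 1.0], [1, 1, 1, 0, 0])
```

A variable-length form would keep administering items (updating theta and the information sum after each response) until the standard error drops to the precision obtained on the first test, mirroring the stopping rule described above.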


Subject(s)
Anomia/diagnosis; Aphasia/diagnosis; Diagnosis, Computer-Assisted/standards; Language Tests/standards; Aged; Bayes Theorem; Diagnosis, Computer-Assisted/methods; Female; Humans; Male; Middle Aged; Prospective Studies; Psychometrics; Reproducibility of Results; Surveys and Questionnaires
9.
Am J Speech Lang Pathol ; 28(1S): 259-277, 2019 03 11.
Article in English | MEDLINE | ID: mdl-30208413

ABSTRACT

Purpose: After stroke, how well do people with aphasia (PWA) adapt to the altered functioning of their language system? When completing a language-dependent task, how well do PWA balance speed and accuracy when the goal is to respond both as quickly and accurately as possible? The current work investigates adaptation theory (Kolk & Heeschen, 1990) in the context of speed-accuracy trade-offs in a lexical decision task. PWA were predicted to set less beneficial speed-accuracy trade-offs than matched controls, and at least some PWA were predicted to present with adaptation deficits, with impaired accuracy or response times attributable to speed-accuracy trade-offs. Method: The study used the diffusion model (Ratcliff, 1978), a computational model of response time for simple 2-choice tasks. Parameters of the model can be used to distinguish basic processing efficiency from the overall level of caution in setting response thresholds and were used here to characterize speed-accuracy trade-offs in 20 PWA and matched controls during a lexical decision task. Results: Models showed that PWA and matched control groups did not differ overall in how they set response thresholds for speed-accuracy trade-offs. However, case series analyses showed that 40% of the PWA group displayed the predicted adaptation deficits, with impaired accuracy or response time performance directly attributable to overly cautious or overly incautious response thresholds. Conclusions: Maladaptive speed-accuracy trade-offs appear to be present in some PWA during lexical decision, leading to adaptation deficits in performance. These adaptation deficits are potentially treatable, and clinical implications and next steps for translational research are discussed.
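The diffusion model referenced above can be made concrete with a forward simulation: evidence accumulates noisily toward one of two boundaries, and the boundary separation is the response-caution (speed-accuracy) parameter. This is a toy Euler-Maruyama sketch with invented parameter values, not the fitting procedure used in the study, which works backward from observed RT distributions to parameter estimates.

```python
import random

def simulate_ddm(drift, threshold, rng, start=0.0, noise=1.0, dt=0.001, t0=0.3):
    """Simulate one two-boundary diffusion trial.  Evidence starts at `start`
    and drifts toward +/- `threshold`; the boundary hit gives the choice,
    and elapsed decision time plus nondecision time t0 gives the RT."""
    x, t = start, 0.0
    step_sd = noise * dt ** 0.5
    while abs(x) < threshold:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return x > 0, t0 + t   # (hit upper boundary?, RT in seconds)

rng = random.Random(42)
trials = [simulate_ddm(drift=1.5, threshold=1.0, rng=rng) for _ in range(500)]
accuracy = sum(correct for correct, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
```

Raising `threshold` trades speed for accuracy (higher accuracy, slower RTs); setting it too low or too high corresponds to the "overly incautious" and "overly cautious" response thresholds discussed above.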


Subject(s)
Adaptation, Psychological/physiology; Aphasia/psychology; Communication; Adult; Aged; Aged, 80 and over; Aphasia/etiology; Case-Control Studies; Decision Making/physiology; Educational Status; Female; Humans; Language Tests; Male; Middle Aged; Models, Psychological; Reaction Time/physiology; Semantics; Sensory Thresholds/physiology; Stroke/complications
10.
Am J Speech Lang Pathol ; 27(1S): 438-453, 2018 03 01.
Article in English | MEDLINE | ID: mdl-29497754

ABSTRACT

Purpose: This study investigated the predictive value of practice-related variables (number of treatment trials delivered, total treatment time, average number of trials per hour, and average number of participant-generated features per trial) in response to semantic feature analysis (SFA) treatment. Method: SFA was administered to 17 participants with chronic aphasia daily for 4 weeks. Individualized treatment and semantically related probe lists were generated from items that participants were unable to name consistently during baseline testing. Treatment was administered to each list sequentially in a multiple-baseline design. Naming accuracy for treated and untreated items was obtained at study entry, exit, and 1-month follow-up. Results: Item-level naming accuracy was analyzed using logistic mixed-effects regression models. The average number of features generated per trial positively predicted naming accuracy for both treated and untreated items at exit and follow-up. In contrast, total treatment time and average trials per hour did not significantly predict treatment response. The predictive effect of the number of treatment trials on naming accuracy trended toward significance at exit, although this relationship held for treated items only. Conclusions: These results suggest that the number of patient-generated features may be more strongly associated with SFA-related naming outcomes, particularly generalization and maintenance, than other practice-related variables. Supplemental Materials: https://doi.org/10.23641/asha.5734113.


Subject(s)
Aphasia/therapy; Comprehension; Semantics; Speech-Language Pathology/methods; Adult; Aged; Aphasia/diagnosis; Aphasia/psychology; Female; Humans; Language Tests; Male; Middle Aged; Recovery of Function; Severity of Illness Index; Time Factors; Treatment Outcome