Results 1 - 9 of 9
1.
J Speech Lang Hear Res; 67(5): 1548-1557, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38557214

ABSTRACT

PURPOSE: Anomia, or word-finding difficulty, is a prevalent and persistent feature of aphasia, a neurogenic language disorder affecting millions of people in the United States. Anomia assessments are essential for measuring performance and monitoring outcomes in clinical settings. This study aims to evaluate the reliability of response time (RT) annotation based on spectrograms and assess the predictive utility of proxy RTs collected during computerized naming tests. METHOD: Archival data from 10 people with aphasia were used. Trained research assistants phonemically transcribed participants' responses, and RTs were generated from the onset of the picture stimulus to the initial phoneme of the first complete attempt. RTs were measured in two ways: hand-generated RTs (from spectrograms) and proxy RTs (automatically extracted online). Interrater agreement was evaluated based on intraclass correlation coefficients and generalizability theory tools, including variance partitioning and the φ-coefficient. The predictive utility of proxy RTs was evaluated within a linear mixed-effects framework. RESULTS: RT annotation reliability showed near-perfect agreement across research assistants (φ-coefficient = .93), and the variance accounted for by raters was negligible. Furthermore, proxy RTs significantly and strongly predicted hand-annotated RTs (R2 = ~0.82), suggesting their utility as an alternative measure. CONCLUSIONS: The study confirms the reliability of RT annotation and demonstrates the predictive utility of proxy RTs in estimating RTs during computerized naming tests. Incorporating proxy RTs can enhance clinical assessments, providing additional information for cognitive measurement. Further research with larger samples and exploring the impact of using proxy RTs in different psychometric models could optimize clinical protocols and improve communication interventions for individuals with aphasia.


Subjects
Anomia , Aphasia , Reaction Time , Humans , Female , Male , Aphasia/diagnosis , Aphasia/psychology , Middle Aged , Aged , Reproducibility of Results , Anomia/diagnosis , Language Tests , Adult , Aged, 80 and over
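The abstract's central predictive check — regressing hand-annotated RTs on automatically extracted proxy RTs — can be sketched with ordinary least squares. The data below are simulated for illustration only (the study used a linear mixed-effects framework; this sketch drops the random effects):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated trials: proxy RTs (ms, auto-extracted online) and hand-annotated
# spectrogram RTs for the same naming attempts. Invented values, not the
# study's data.
proxy = rng.uniform(800, 3000, size=60)
hand = proxy + rng.normal(0, 250, size=60)  # proxy tracks hand RT with noise

# Ordinary least squares: hand ~ proxy, then the variance explained (R^2),
# the quantity the abstract reports as ~0.82 for its mixed-effects model.
slope, intercept = np.polyfit(proxy, hand, 1)
pred = intercept + slope * proxy
ss_res = np.sum((hand - pred) ** 2)
ss_tot = np.sum((hand - hand.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

With these simulation settings the proxy measure recovers most of the variance in the hand-annotated RTs, mirroring the qualitative pattern the study describes.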
2.
J Speech Lang Hear Res; 66(4): 1351-1364, 2023 04 12.
Article in English | MEDLINE | ID: mdl-37014997

ABSTRACT

PURPOSE: The purpose of this study was to evaluate whether a short-form computerized adaptive testing (CAT) version of the Philadelphia Naming Test (PNT) provides error profiles and model-based estimates of semantic and phonological processing that agree with the full test. METHOD: Twenty-four persons with aphasia took the PNT-CAT and the full version of the PNT (hereinafter referred to as the "full PNT") at least 2 weeks apart. The PNT-CAT proceeded in two stages: (a) the PNT-CAT30, in which 30 items were selected to match the evolving ability estimate with the goal of producing a 50% error rate, and (b) the PNT-CAT60, in which an additional 30 items were selected to produce a 75% error rate. Agreement was evaluated in terms of the root-mean-square deviation of the response-type proportions and, for individual response types, in terms of agreement coefficients and bias. We also evaluated agreement and bias for estimates of semantic and phonological processing derived from the semantic-phonological interactive two-step model (SP model) of word production. RESULTS: The results suggested that agreement was poorest for semantic, formal, mixed, and unrelated errors, all of which were underestimated by the short forms. Better agreement was observed for correct and nonword responses. SP model weights estimated by the short forms demonstrated no substantial bias but generally inadequate agreement with the full PNT, which itself showed acceptable test-retest reliability for SP model weights and all response types except for formal errors. DISCUSSION: Results suggest that the PNT-CAT30 and the PNT-CAT60 are generally inadequate for generating naming error profiles or model-derived estimates of semantic and phonological processing ability. Post hoc analyses suggested that increasing the number of stimuli available in the CAT item bank may improve the utility of adaptive short forms for generating error profiles, but the underlying theory also suggests that there are limitations to this approach based on a unidimensional measurement model. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.22320814.


Subjects
Aphasia , Humans , Aphasia/diagnosis , Linguistics , Reproducibility of Results , Semantics
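The root-mean-square deviation of response-type proportions used above as an agreement summary is straightforward to compute. The proportions below are invented for illustration; the category labels loosely follow the PNT scoring scheme:

```python
import math

# Hypothetical response-type proportions for one examinee on the full test
# and on a 30-item short form (invented numbers, not the study's data).
full_test = {"correct": .55, "semantic": .10, "formal": .08,
             "mixed": .05, "unrelated": .07, "nonword": .15}
short_form = {"correct": .50, "semantic": .07, "formal": .10,
              "mixed": .03, "unrelated": .10, "nonword": .20}

# Root-mean-square deviation across response types: one number summarizing
# how closely the short form reproduces the full-test error profile.
rmsd = math.sqrt(
    sum((full_test[k] - short_form[k]) ** 2 for k in full_test) / len(full_test)
)
```

Smaller values indicate closer profile agreement; the study supplements this overall summary with per-category agreement coefficients and bias.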
3.
J Speech Lang Hear Res; 66(5): 1718-1739, 2023 05 09.
Article in English | MEDLINE | ID: mdl-37000934

ABSTRACT

PURPOSE: Item response theory (IRT) is a modern psychometric framework with several advantageous properties as compared with classical test theory. IRT has been successfully used to model performance on anomia tests in individuals with aphasia; however, all efforts to date have focused on noun production accuracy. The purpose of this study is to evaluate whether the Verb Naming Test (VNT), a prominent test of action naming, can be successfully modeled under IRT and evaluate its reliability. METHOD: We used responses on the VNT from 107 individuals with chronic aphasia from AphasiaBank. Unidimensionality and local independence, two assumptions prerequisite to IRT modeling, were evaluated using factor analysis and Yen's Q3 statistic (Yen, 1984), respectively. The assumption of equal discrimination among test items was evaluated statistically via nested model comparisons and practically by using correlations of resulting IRT-derived scores. Finally, internal consistency, marginal and empirical reliability, and conditional reliability were evaluated. RESULTS: The VNT was found to be sufficiently unidimensional with the majority of item pairs demonstrating adequate local independence. An IRT model in which item discriminations are constrained to be equal demonstrated fit equivalent to a model in which unique discrimination parameters were estimated for each item. All forms of reliability were strong across the majority of IRT ability estimates. CONCLUSIONS: Modeling the VNT using IRT is feasible, yielding ability estimates that are both informative and reliable. Future efforts are needed to quantify the validity of the VNT under IRT and determine the extent to which it measures the same construct as other anomia tests. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.22329235.


Subjects
Anomia , Humans , Anomia/diagnosis , Reproducibility of Results , Factor Analysis , Psychometrics
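Yen's Q3 check for local independence amounts to correlating item residuals (observed minus model-expected responses) across item pairs; values near zero support the assumption. A minimal sketch under a Rasch (1PL) model, using simulated abilities and difficulties rather than estimates from real VNT data (a full Q3 analysis would use estimated, not true, abilities):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated Rasch (1PL) setup: person abilities theta, item difficulties b.
n_people, n_items = 500, 4
theta = rng.normal(0, 1, n_people)
b = np.array([-1.0, -0.3, 0.4, 1.1])

# Model-implied success probabilities and simulated binary responses.
p = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
x = (rng.uniform(size=p.shape) < p).astype(float)

# Yen's Q3: correlate residuals (observed - expected) for each item pair.
resid = x - p
q3 = np.corrcoef(resid, rowvar=False)  # n_items x n_items matrix
max_offdiag = np.max(np.abs(q3[np.triu_indices(n_items, k=1)]))
```

Because the responses here are generated under local independence, the off-diagonal Q3 values stay close to zero; a large |Q3| for some pair would flag a locally dependent item pair.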
4.
J Speech Lang Hear Res; 66(6): 1908-1927, 2023 06 20.
Article in English | MEDLINE | ID: mdl-36542852

ABSTRACT

PURPOSE: Small-N studies are the dominant study design supporting evidence-based interventions in communication science and disorders, including treatments for aphasia and related disorders. However, there is little guidance for conducting reproducible analyses or selecting appropriate effect sizes in small-N studies, which has implications for scientific review, rigor, and replication. This tutorial aims to (a) demonstrate how to conduct reproducible analyses using effect sizes common to research in aphasia and related disorders and (b) provide a conceptual discussion to improve the reader's understanding of these effect sizes. METHOD: We provide a tutorial on reproducible analyses of small-N designs in the statistical programming language R using published data from Wambaugh et al. (2017). In addition, we discuss the strengths, weaknesses, reporting requirements, and impact of experimental design decisions on effect sizes common to this body of research. RESULTS: Reproducible code demonstrates implementation and comparison of within-case standardized mean difference, proportion of maximal gain, tau-U, and frequentist and Bayesian mixed-effects models. Data, code, and an interactive web application are available as a resource for researchers, clinicians, and students. CONCLUSIONS: Pursuing reproducible research is key to promoting transparency in small-N treatment research. Researchers and clinicians must understand the properties of common effect size measures to make informed decisions in order to select ideal effect size measures and act as informed consumers of small-N studies. Together, a commitment to reproducibility and a keen understanding of effect sizes can improve the scientific rigor and synthesis of the evidence supporting clinical services in aphasiology and in communication sciences and disorders more broadly. Supplemental Material and Open Science Form: https://doi.org/10.23641/asha.21699476.


Subjects
Aphasia , Humans , Reproducibility of Results , Bayes Theorem , Aphasia/therapy , Communication , Students
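Two of the effect sizes the tutorial compares are easy to illustrate directly. The probe accuracies below are invented single-case data, not Wambaugh et al.'s; the tau computed here is the simple non-overlap form of tau-U (without the baseline-trend correction), and the standardized mean difference follows the within-case form that scales by the baseline standard deviation:

```python
import statistics

# Hypothetical single-case probe accuracies (% correct) across sessions.
baseline = [10, 15, 10, 20, 15]
treatment = [30, 45, 50, 60, 70, 75]

# Tau (non-overlap): proportion of baseline/treatment pairs that improve,
# minus the proportion that decline, over all cross-phase pairs.
pairs = [(a, b) for a in baseline for b in treatment]
tau = sum((b > a) - (b < a) for a, b in pairs) / len(pairs)

# Within-case standardized mean difference: change in phase means scaled
# by the baseline standard deviation (sample SD, n - 1 denominator).
d = (statistics.mean(treatment) - statistics.mean(baseline)) / statistics.stdev(baseline)
```

With complete separation between phases, tau hits its ceiling of 1.0 while d keeps growing with the size of the mean change — one concrete illustration of why the tutorial stresses understanding each measure's properties before selecting one.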
5.
J Speech Lang Hear Res; 64(11): 4308-4328, 2021 11 08.
Article in English | MEDLINE | ID: mdl-34694908

ABSTRACT

Purpose This meta-analysis synthesizes published studies using "treatment of underlying forms" (TUF) for sentence-level deficits in people with aphasia (PWA). The study aims were to examine group-level evidence for TUF efficacy, to characterize the effects of treatment-related variables (sentence structural family and complexity; treatment dose) in relation to the Complexity Account of Treatment Efficacy (CATE) hypothesis, and to examine the effects of person-level variables (aphasia severity, sentence comprehension impairment, and time postonset of aphasia) on TUF response. Method Data from 13 single-subject, multiple-baseline TUF studies, including 46 PWA, were analyzed. Bayesian generalized linear mixed-effects interrupted time series models were used to assess the effect of treatment-related variables on probe accuracy during baseline and treatment. The moderating influence of person-level variables on TUF response was also investigated. Results The results provide group-level evidence for TUF efficacy demonstrating increased probe accuracy during treatment compared with baseline phases. Greater amounts of TUF were associated with larger increases in accuracy, with greater gains for treated than untreated sentences. The findings revealed generalization effects for sentences that were of the same family but less complex than treated sentences. Aphasia severity may moderate TUF response, with people with milder aphasia demonstrating greater gains compared with people with more severe aphasia. Sentence comprehension performance did not moderate TUF response. Greater time postonset of aphasia was associated with smaller improvements for treated sentences but not for untreated sentences. Conclusions Our results provide generalizable group-level evidence of TUF efficacy. Treatment and generalization responses were consistent with the CATE hypothesis. Model results also identified person-level moderators of TUF (aphasia severity, time postonset of aphasia) and preliminary estimates of the effects of varying amounts of TUF for treated and untreated sentences. Taken together, these findings add to the TUF evidence and may guide future TUF treatment-candidate selection. Supplemental Material https://doi.org/10.23641/asha.16828630.


Subjects
Aphasia , Aphasia/therapy , Bayes Theorem , Comprehension , Humans , Language , Language Tests
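The fixed-effects skeleton of an interrupted time-series model for binary probe accuracy can be sketched on the logit scale. The coefficients below are invented for illustration; the meta-analysis fits full Bayesian mixed-effects versions with person- and study-level random effects on top of this structure:

```python
import math

# Interrupted time-series linear predictor for probe accuracy:
# logit(p) = b0 + b1*session + b2*in_treatment + b3*sessions_in_treatment.
# b2 captures the level shift at treatment onset; b3 the slope change
# (dose effect). Coefficient values are illustrative only.
b0, b1, b2, b3 = -2.0, 0.0, 0.8, 0.25

def p_correct(session, in_treatment, sessions_in_treatment):
    """Model-implied probability of a correct probe response."""
    eta = b0 + b1 * session + b2 * in_treatment + b3 * sessions_in_treatment
    return 1 / (1 + math.exp(-eta))

baseline_p = p_correct(3, 0, 0)      # flat baseline (b1 = 0 here)
treated_p = p_correct(9, 1, 6)       # after 6 treatment sessions
```

In this parameterization, "greater amounts of TUF associated with larger increases in accuracy" corresponds to a positive b3: predicted accuracy climbs with each additional treatment session.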
6.
Semin Speech Lang; 42(3): 180-191, 2021 06.
Article in English | MEDLINE | ID: mdl-34261162

ABSTRACT

Anomia assessment is a fundamental component of clinical practice and research inquiries involving individuals with aphasia, and confrontation naming tasks are among the most commonly used tools for quantifying anomia severity. While currently available confrontation naming tests possess many ideal properties, they are ultimately limited by the overarching psychometric framework they were developed within. Here, we discuss the challenges inherent to confrontation naming tests and present a modern alternative to test development called item response theory (IRT). Key concepts of IRT approaches are reviewed in relation to their relevance to aphasiology, highlighting the ability of IRT to create flexible and efficient tests that yield precise measurements of anomia severity. Empirical evidence from our research group on the application of IRT methods to a commonly used confrontation naming test is discussed, along with future avenues for test development.


Subjects
Anomia , Aphasia , Aphasia/diagnosis , Computers , Humans
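The core IRT idea the review builds on is the item response function: the probability of a correct naming response as a function of person ability and item difficulty. A minimal sketch of the 1PL (Rasch) form, with illustrative parameter values:

```python
import math

def p_1pl(theta, b):
    """1PL (Rasch) item response function: P(correct | ability theta,
    item difficulty b). Higher theta or lower b -> higher probability."""
    return 1 / (1 + math.exp(-(theta - b)))

# An easier item (b = -1) vs. a harder one (b = +1) for a person of
# average ability (theta = 0):
easy = p_1pl(0.0, -1.0)
hard = p_1pl(0.0, 1.0)
```

Because ability and difficulty sit on the same scale, a calibrated item bank lets a test select items matched to each examinee — the property underlying the flexible, efficient anomia tests the review describes.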
7.
J Speech Lang Hear Res; 63(1): 163-172, 2020 01 22.
Article in English | MEDLINE | ID: mdl-31851861

ABSTRACT

Purpose The purpose of this study was to verify the equivalence of 2 alternate test forms with nonoverlapping content generated by an item response theory (IRT)-based computer-adaptive test (CAT). The Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) was utilized as an item bank in a prospective, independent sample of persons with aphasia. Method Two alternate CAT short forms of the PNT were administered to a sample of 25 persons with aphasia who were at least 6 months postonset and received no treatment for 2 weeks before or during the study. The 1st session included administration of a 30-item PNT-CAT, and the 2nd session, conducted approximately 2 weeks later, included a variable-length PNT-CAT that excluded items administered in the 1st session and terminated when the modeled precision of the ability estimate was equal to or greater than the value obtained in the 1st session. The ability estimates were analyzed in a Bayesian framework. Results The 2 test versions correlated highly (r = .89) and obtained means and standard deviations that were not credibly different from one another. The correlation and error variance between the 2 test versions were well predicted by the IRT measurement model. Discussion The results suggest that IRT-based CAT alternate forms may be productively used in the assessment of anomia. IRT methods offer advantages for the efficient and sensitive measurement of change over time. Future work should consider the potential impact of differential item functioning due to person factors and intervention-specific effects, as well as expanding the item bank to maximize the clinical utility of the test. Supplemental Material https://doi.org/10.23641/asha.11368040.


Subjects
Anomia/diagnosis , Aphasia/diagnosis , Diagnosis, Computer-Assisted/standards , Language Tests/standards , Aged , Bayes Theorem , Diagnosis, Computer-Assisted/methods , Female , Humans , Male , Middle Aged , Prospective Studies , Psychometrics , Reproducibility of Results , Surveys and Questionnaires
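The variable-length CAT logic described above — administer the most informative remaining item, stop once the modeled precision reaches a target — can be sketched for a 1PL model, where item information is I(θ, b) = p(1 - p) and SE = 1/√(total information). This toy version holds the ability estimate fixed to isolate the selection/stopping rule (a real CAT re-estimates ability after every response), and the item bank difficulties are invented:

```python
import math

# Toy 1PL item bank: difficulties on the same logit scale as ability.
bank = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]

def info(theta, b):
    """1PL item information I(theta, b) = p * (1 - p)."""
    p = 1 / (1 + math.exp(-(theta - b)))
    return p * (1 - p)

def run_cat(theta_hat, target_se):
    """Administer max-information items until SE <= target_se."""
    used, total_info = [], 0.0
    while len(used) < len(bank):
        remaining = [b for b in bank if b not in used]
        best = max(remaining, key=lambda b: info(theta_hat, b))
        used.append(best)
        total_info += info(theta_hat, best)
        if 1 / math.sqrt(total_info) <= target_se:  # SE = 1/sqrt(information)
            break
    return used

items = run_cat(theta_hat=0.0, target_se=1.1)
```

Items closest to the ability estimate are most informative, so the test converges on its precision target with far fewer items than the full bank — the mechanism behind the 30-item and variable-length short forms.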
8.
Aphasiology; 33(6): 689-709, 2019.
Article in English | MEDLINE | ID: mdl-31462841

ABSTRACT

BACKGROUND: Item response theory (IRT; Lord & Novick, 1968) is a psychometric framework that can be used to model the likelihood that an individual will respond correctly to an item. Using archival data (Mirman et al., 2010), Fergadiotis, Kellough, and Hula (2015) estimated difficulty parameters for the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) using the 1-parameter logistic IRT model. Although the use of IRT in test development is advantageous, its reliance on sample sizes exceeding 200 participants makes it difficult to implement in aphasiology. Therefore, alternate means of estimating the item difficulty of confrontation naming test items warrant investigation. In a preliminary study aimed at automatic item calibration, Swiderski, Fergadiotis, and Hula (2016) regressed the difficulty parameters from the PNT on word length, age of acquisition (Kuperman et al., 2012), lexical frequency as quantified by the Log10CD index (Brysbaert & New, 2009), and naming latency (Székely et al., 2003). Although this model successfully explained a substantial proportion of variance in the PNT difficulty parameters, a substantial proportion (20%) of the response time data were missing. Further, only 39% of the picture stimuli from Székely and colleagues (2003) were identical to those on the PNT. Given that the IRT sample size requirements limit traditional calibration approaches in aphasiology, and that the initial attempts at predicting IRT difficulty parameters in our pilot study were based on incomplete response time data, this study has two specific aims. AIMS: To estimate naming latencies for the 175 items on the PNT, and assess the utility of psycholinguistic variables and naming latencies for predicting item difficulty. METHODS AND PROCEDURES: Using a speeded picture naming task we estimated mean naming latencies for the 175 items of the Philadelphia Naming Test in 44 cognitively healthy adults. We then re-estimated the model reported by Swiderski et al. (2016) with the new naming latency data. RESULTS: The predictor variables described above accounted for a substantial proportion of the variance in the item difficulty parameters (Adj. R2 = .692). CONCLUSIONS: In this study we demonstrated that word length, age of acquisition, lexical frequency, and naming latency from neurotypical young adults usefully predict picture naming item difficulty in people with aphasia. These variables are readily available or easily obtained and the regression model reported may be useful for estimating confrontation naming item difficulty without the need for collection of response data from large samples of people with aphasia.
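The regression approach described — predicting IRT item difficulty from psycholinguistic variables and reporting adjusted R² — can be sketched with ordinary least squares. The predictors and difficulty values below are simulated stand-ins for the four variables the study used (word length, age of acquisition, lexical frequency, naming latency), not the actual PNT data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 175  # number of PNT items

# Simulated standardized predictors: word length, age of acquisition,
# log lexical frequency, naming latency (illustrative, not real norms).
X = rng.normal(size=(n, 4))
beta_true = np.array([0.3, 0.5, -0.4, 0.35])
y = X @ beta_true + rng.normal(0, 0.6, n)  # simulated difficulty parameters

# Ordinary least squares with an intercept, then adjusted R^2 (the
# statistic the study reports, penalized for the number of predictors).
Xd = np.column_stack([np.ones(n), X])
beta_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta_hat
r2 = 1 - resid.var() / y.var()
k = X.shape[1]
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
```

Once such a model is calibrated, difficulty for a new naming item can be approximated from its readily available predictor values alone, sidestepping the 200-participant samples that conventional IRT calibration requires.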

9.
J Speech Lang Hear Res; 62(6): 1724-1738, 2019 06 19.
Article in English | MEDLINE | ID: mdl-31158037

ABSTRACT

Purpose In this study, we investigated the agreement between the 175-item Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) and a 30-item computer adaptive PNT (PNT-CAT; Fergadiotis, Kellough, & Hula, 2015; Hula, Kellough, & Fergadiotis, 2015) created using item response theory (IRT) methods. Method The full PNT and the PNT-CAT were administered to 47 participants with aphasia in counterbalanced order. Latent trait-naming ability estimates for the 2 PNT versions were analyzed in a Bayesian framework, and the agreement between them was evaluated using correlation and measures of constant, variable, and total error. We also evaluated the extent to which individual pairwise differences were credibly greater than 0 and whether the IRT measurement model provided an adequate indication of the precision of individual score estimates. Results The agreement between the PNT and the PNT-CAT was strong, as indicated by high correlation (r = .95, 95% CI [.92, .97]), negligible bias, and low variable and total error. The number of statistically robust pairwise score differences did not credibly exceed the Type I error rate, and the precision of individual score estimates was reasonably well predicted by the IRT model. Discussion The strong agreement between the full PNT and the PNT-CAT suggests that the latter is a suitable measure of anomia in group studies. The relatively robust estimates of score precision also suggest that the PNT-CAT can be useful for the clinical assessment of anomia in individual cases. Finally, the IRT methods used to construct the PNT-CAT provide a framework for additional development to further reduce measurement error. Supplemental Material https://doi.org/10.23641/asha.8202176.


Subjects
Anomia/diagnosis , Aphasia/diagnosis , Diagnosis, Computer-Assisted/methods , Language Tests/standards , Adult , Aged , Aged, 80 and over , Bayes Theorem , Female , Humans , Male , Middle Aged , Reproducibility of Results
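The constant/variable/total error decomposition used to quantify agreement between the two forms can be sketched directly on paired score differences. The ability estimates below are invented for illustration:

```python
import statistics

# Hypothetical paired ability estimates (logits) from the full test and
# the short form for the same examinees (invented, not the study's data).
full_scores = [-1.2, -0.4, 0.1, 0.6, 1.3, 1.8]
short_scores = [-1.0, -0.5, 0.3, 0.5, 1.5, 1.7]

diffs = [s - f for s, f in zip(short_scores, full_scores)]

# Constant error (bias): mean difference between forms.
constant = statistics.mean(diffs)
# Variable error: spread of the differences around the bias
# (population SD of the paired differences).
variable = statistics.pstdev(diffs)
# Total error (RMSD): combines both, total^2 = constant^2 + variable^2.
total = (constant ** 2 + variable ** 2) ** 0.5
```

Separating the two components matters because a pure constant error (every short-form score shifted by the same amount) is correctable, whereas variable error reflects genuine measurement noise between the forms.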