1.
J Speech Lang Hear Res ; 66(3): 966-986, 2023 03 07.
Article En | MEDLINE | ID: mdl-36791263

PURPOSE: A preliminary version of a paraphasia classification algorithm (henceforth called ParAlg) has previously been shown to be a viable method for coding picture naming errors. The purpose of this study is to present an updated version of ParAlg, which uses multinomial classification, and comprehensively evaluate its performance when using two different forms of transcribed input. METHOD: A subset of 11,999 archival responses produced on the Philadelphia Naming Test were classified into six cardinal paraphasia types using ParAlg under two transcription configurations: (a) using phonemic transcriptions for responses exclusively (phonemic-only) and (b) using phonemic transcriptions for nonlexical responses and orthographic transcriptions for lexical responses (orthographic-lexical). Agreement was quantified by comparing ParAlg-generated paraphasia codes between configurations and relative to human-annotated codes using four metrics (positive predictive value, sensitivity, specificity, and F1 score). An item-level qualitative analysis of misclassifications under the best performing configuration was also completed to identify the source and nature of coding discrepancies. RESULTS: Agreement between ParAlg-generated and human-annotated codes was high, although the orthographic-lexical configuration outperformed phonemic-only (weighted-average F1 scores of .78 and .87, respectively). A qualitative analysis of the orthographic-lexical configuration revealed a mix of human- and ParAlg-related misclassifications, the former of which were related primarily to phonological similarity judgments whereas the latter were due to semantic similarity assignment. CONCLUSIONS: ParAlg is an accurate and efficient alternative to manual scoring of paraphasias, particularly when lexical responses are orthographically transcribed. With further development, it has the potential to be a useful software application for anomia assessment. 
SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.22087763.
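
The abstract reports agreement as positive predictive value (precision), sensitivity (recall), specificity, and a weighted-average F1 score over the six paraphasia classes. The paper's own scoring code is not reproduced here; the following is a minimal stdlib-only sketch of how a weighted-average F1 is computed from parallel lists of gold (human-annotated) and predicted (ParAlg-generated) codes. The label strings are hypothetical placeholders, not the study's actual coding scheme.

```python
from collections import Counter

def weighted_f1(gold, pred):
    """Weighted-average F1 over multinomial class labels.

    `gold` and `pred` are parallel lists of category codes
    (e.g. hypothetical paraphasia labels such as "semantic" or
    "neologistic"). Each class's F1 is weighted by its support
    (its frequency in the gold annotations).
    """
    support = Counter(gold)
    total = len(gold)
    f1_sum = 0.0
    for lab in support:
        tp = sum(1 for g, p in zip(gold, pred) if g == lab and p == lab)
        fp = sum(1 for g, p in zip(gold, pred) if g != lab and p == lab)
        fn = sum(1 for g, p in zip(gold, pred) if g == lab and p != lab)
        # Precision is the positive predictive value; recall is sensitivity.
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        f1_sum += support[lab] * f1
    return f1_sum / total
```

Computing the metric once per transcription configuration (phonemic-only vs. orthographic-lexical) and comparing the two values mirrors the comparison the abstract reports (.78 vs. .87).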


Aphasia , Humans , Anomia , Semantics , Neuropsychological Tests , Algorithms
2.
Am J Speech Lang Pathol ; 30(1S): 491-502, 2021 02 11.
Article En | MEDLINE | ID: mdl-32585117

Purpose The heterogeneous nature of measures, methods, and analyses reported in the aphasia spoken discourse literature precludes comparison of outcomes across studies (e.g., meta-analyses) and inhibits replication. Furthermore, funding and time constraints significantly hinder collecting test-retest data on spoken discourse outcomes. This research note describes the development and structure of a working group, designed to address major gaps in the spoken discourse aphasia literature, including a lack of standardization in methodology, analysis, and reporting, as well as nominal data regarding the psychometric properties of spoken discourse outcomes. Method The initial initiatives for this working group are to (a) propose recommendations regarding standardization of spoken discourse collection, analysis, and reporting in aphasia, based on the results of an international survey and a systematic literature review and (b) create a database of test-retest spoken discourse data from individuals with and without aphasia. The survey of spoken discourse collection, analysis, and interpretation procedures was distributed to clinicians and researchers involved in aphasia assessment and rehabilitation from September to November 2019. We will publish survey results and recommend standards for collecting, analyzing, and reporting spoken discourse in aphasia. A multisite endeavor to collect test-retest spoken discourse data from individuals with and without aphasia will be initiated. This test-retest information will be contributed to a central site for transcription and analysis, and data will be subsequently openly curated. 
Conclusion The goal of the working group is to create recommendations for field-wide standards in methods, analysis, and reporting of spoken discourse outcomes, as has been done across other related disciplines (e.g., Consolidated Standards of Reporting Trials, Enhancing the Quality and Transparency of Health Research, Committee on Best Practice in Data Analysis and Sharing). Additionally, the creation of a database through our multisite collaboration will allow the identification of psychometrically sound outcome measures and norms that can be used by clinicians and researchers to assess spoken discourse abilities in aphasia.


Aphasia , Aphasia/diagnosis , Aphasia/therapy , Humans , Psychometrics , Surveys and Questionnaires
3.
Proc Conf ; 2019(RepEval): 52-62, 2019 Jun.
Article En | MEDLINE | ID: mdl-37426745

In clinical assessment of people with aphasia, impairment in the ability to recall and produce words for objects (anomia) is assessed using a confrontation naming task, in which the participant views a target stimulus and speaks a corresponding label. Vector space word embedding models have shown promising initial results in assessing the semantic similarity of target-production pairs in order to automate scoring of this task; however, the resulting models are also highly dependent upon training parameters. To select an optimal family of models, we fit a beta regression model to the distribution of performance metrics on a set of 2,880 grid search models and evaluate the resultant first- and second-order effects to explore how parameterization affects model performance. Comparing to SimLex-999, we show that clinical data can be used in an evaluation task with comparable optimal parameter settings as standard NLP evaluation datasets.
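
The core operation behind embedding-based scoring of a target-production pair is a cosine similarity between the two words' vectors. The abstract does not include code, so the sketch below is a minimal stdlib-only illustration; the word list and 3-dimensional vectors are toy placeholders (real embedding models use hundreds of dimensions learned from corpora).

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Toy 3-d vectors standing in for learned word embeddings.
embeddings = {
    "cat":   [0.9, 0.1, 0.2],
    "dog":   [0.8, 0.2, 0.3],
    "piano": [0.1, 0.9, 0.4],
}

# Scoring a naming response: for target "cat", the semantically
# related production "dog" should score higher than "piano".
related_score = cosine(embeddings["cat"], embeddings["dog"])
unrelated_score = cosine(embeddings["cat"], embeddings["piano"])
```

Under a grid search such as the one described, each trained model yields its own embedding table, and the distribution of similarity-based performance metrics across models is what the beta regression is fit to.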

...