Results 1 - 20 of 451
1.
Dyslexia ; 30(3): e1777, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38952195

ABSTRACT

This article aims to assist practitioners in understanding dyslexia and other reading difficulties and assessing students' learning needs. We describe the essential components of language and literacy, universal screening, diagnostic assessments, curriculum-based measurement and eligibility determination. We then introduce four diagnostic assessments as examples, including norm-referenced assessments (i.e. the Comprehensive Test of Phonological Processing second edition and the Woodcock-Johnson IV Tests of Achievement) and criterion-referenced assessments (i.e. the Gallistel-Ellis Test of Coding Skills and the Dynamic Indicators of Basic Early Literacy Skills). Finally, we use a made-up case as a concrete example to illustrate how multiple diagnostic assessments are recorded and how the results can be used to inform intervention and eligibility for special education services.


Subject(s)
Dyslexia , Humans , Dyslexia/diagnosis , Child , Reading , Educational Measurement/standards , Language Tests/standards , Students , Literacy , Education, Special
2.
Lang Speech Hear Serv Sch ; 55(3): 918-937, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38889198

ABSTRACT

PURPOSE: We investigated and compared the outcomes from two standardized, norm-referenced screening assessments of language (i.e., Clinical Evaluation of Language Fundamentals Preschool-Second Edition [CELFP-2], Diagnostic Evaluation of Language Variation-Screening Test [DELV-ST]) with African American preschoolers whose spoken dialect differed from that of General American English (GAE). We (a) described preschoolers' performance on the CELFP-2 Core Language Index (CLI) and its subtests with consideration of degree of dialect variation (DVAR) observed, (b) investigated how the application of dialect-sensitive scoring modifications to the expressive morphology and syntax Word Structure (WS) subtest affected CELFP-2 CLI scores, and (c) evaluated the screening classification agreement rates between the DELV-ST and the CELFP-2 CLI. METHOD: African American preschoolers (N = 284) completed the CELFP-2 CLI subtests (i.e., Sentence Structure, WS, Expressive Vocabulary) and the DELV-ST. Density of spoken dialect use was estimated with the DELV-ST Part I Language Variation Status, and percentage of DVAR was calculated. The CELFP-2 WS subtest was scored with and without dialect-sensitive scoring modifications. RESULTS: Planned comparisons of CELFP-2 CLI performance indicated statistically significant differences in performance based on DELV-ST-determined degree of language variation groupings. Scoring modifications applied to the WS subtest increased subtest scaled scores and CLI composite standard scores. However, preschoolers who demonstrated strong variation from GAE continued to demonstrate significantly lower performance than preschoolers who demonstrated little to no language variation. Affected-status agreement rates between assessments (modified and unmodified CELFP-2 CLI scores and DELV-ST Part II Diagnostic Risk Status) were extremely low. 
CONCLUSIONS: The application of dialect-specific scoring modifications to standardized, norm-referenced assessments of language must be simultaneously viewed through the lenses of equity, practicality, and psychometry. The results of our multistage study reiterate the need for reliable methods of identifying risk for developmental language disorder within children who speak American English dialects other than GAE. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.26017978.


Subject(s)
Black or African American , Language Tests , Humans , Child, Preschool , Female , Male , Language Tests/standards , Child Language , Language Development Disorders/diagnosis , Language Development Disorders/ethnology , Language
3.
Am J Speech Lang Pathol ; 33(4): 1986-2001, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38838249

ABSTRACT

PURPOSE: Because prior work has identified weaknesses, due to sample size and elicitation variation, in commonly used indices of lexical diversity in spoken language samples, such as the type-token ratio (TTR), we explored whether TTR and other diversity measures, such as number of different words/100 (NDW), vocabulary diversity (VocD), and the moving-average TTR, would be more sensitive to child age and clinical status (typically developing [TD] or developmental language disorder [DLD]) if samples were obtained from standardized prompts. METHOD: We utilized archival data from the norming samples of the Test of Narrative Language and the Edmonton Narrative Norms Instrument. We examined lexical diversity and other linguistic properties of the samples, from a total of 1,048 children, ages 4-11 years; 798 of these were considered TD, whereas 250 were categorized as having a language learning disorder. RESULTS: TTR was the least sensitive to child age or diagnostic group, with good potential to misidentify children with DLD as TD and TD children as having DLD. Growth slopes of NDW were shallow and not very sensitive to diagnostic grouping. The strongest performing measure was VocD. Mean length of utterance, total number of words (TNW), and verbs/utterance did show both good growth trajectories and the ability to distinguish between clinical and typical samples. CONCLUSIONS: This study, the largest and best controlled to date, reaffirms that TTR should not be used in clinical decision making with children. A second popular measure, NDW, is not measurably stronger in terms of its psychometric properties. Because the most sensitive measure of lexical diversity, VocD, is unlikely to gain popularity because of its reliance on computer-assisted analysis, we suggest alternatives for the appraisal of children's expressive vocabulary skill.
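The simpler lexical diversity indices compared in the abstract above can be computed directly from a tokenized language sample. The sketch below is illustrative only, not the study's code; VocD is omitted because it requires random subsampling and curve fitting, and the window sizes are assumptions:

```python
# Illustrative implementations of common lexical diversity indices.

def ttr(tokens):
    """Type-token ratio: unique words divided by total words.
    Known to shrink as sample length grows."""
    return len(set(tokens)) / len(tokens)

def ndw(tokens, window=100):
    """Number of different words in the first `window` tokens
    (NDW/100 when window=100)."""
    return len(set(tokens[:window]))

def mattr(tokens, window=50):
    """Moving-average TTR: mean TTR over all sliding windows of a
    fixed size, which reduces TTR's sensitivity to sample length."""
    if len(tokens) <= window:
        return ttr(tokens)
    windows = [tokens[i:i + window] for i in range(len(tokens) - window + 1)]
    return sum(ttr(w) for w in windows) / len(windows)
```

Because TTR's denominator is the whole sample, longer samples mechanically depress it, which is one root of the weaknesses the study describes; the fixed-window measures sidestep that but introduce a window-size choice.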


Subject(s)
Child Language , Language Development Disorders , Language Tests , Vocabulary , Humans , Child, Preschool , Child , Male , Female , Language Development Disorders/diagnosis , Language Tests/standards , Age Factors , Reproducibility of Results
4.
Lang Speech Hear Serv Sch ; 55(3): 904-917, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38776269

ABSTRACT

PURPOSE: Oral language skills provide a critical foundation for formal education and especially for the development of children's literacy (reading and spelling) skills. It is therefore important for teachers to be able to assess children's language skills, especially if they are concerned about their learning. We report the development and standardization of a mobile app, LanguageScreen, that can be used by education professionals to assess children's language ability. METHOD: The standardization sample included data from approximately 350,000 children aged 3;06 (years;months) to 8;11 who were screened for receptive and expressive language skills using LanguageScreen. Rasch scaling was used to select items of appropriate difficulty on a single unidimensional scale. RESULTS: LanguageScreen has excellent psychometric properties, including high reliability, good fit to the Rasch model, and minimal differential item functioning across key student groups. Girls outperformed boys, and children with English as an additional language scored less well than monolingual English speakers. CONCLUSIONS: LanguageScreen provides an easy-to-use, reliable, child-friendly means of identifying children with language difficulties. Its use in schools may serve to raise teachers' awareness of variations in language skills and their importance for educational practice.
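The Rasch scaling mentioned above models the probability of a correct item response as a logistic function of the difference between a person's ability and an item's difficulty, both on the same logit scale. A minimal sketch of the dichotomous Rasch model (the ability and difficulty values below are invented for illustration):

```python
import math

def rasch_p(ability, difficulty):
    """Dichotomous Rasch model: probability that a person of the
    given ability answers an item of the given difficulty correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A person whose ability equals the item's difficulty has exactly a
# 50% chance of success; easier items (lower difficulty) raise it.
p_matched = rasch_p(ability=0.0, difficulty=0.0)   # 0.5
p_easier = rasch_p(ability=0.0, difficulty=-2.0)
```

Item selection under this model typically keeps items whose estimated difficulties sit where they are most informative for the abilities being measured, which is what a "single unidimensional scale" of appropriately difficult items amounts to.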


Subject(s)
Language Tests , Mobile Applications , Psychometrics , Humans , Child , Mobile Applications/standards , Male , Female , Language Tests/standards , Child, Preschool , Reproducibility of Results , Psychometrics/instrumentation , Psychometrics/standards , Psychometrics/methods , Child Language , Language Development Disorders/diagnosis
5.
PLoS One ; 19(5): e0303810, 2024.
Article in English | MEDLINE | ID: mdl-38787889

ABSTRACT

BACKGROUND: The current study aimed to validate the Cantonese version of the Amsterdam-Nijmegen Everyday Language Test (CANELT), a functional communication assessment tool for Cantonese speakers with aphasia. A quantitative scoring method was adopted to examine the pragmatics and informativeness of the production of people with aphasia (PWA). METHOD: CANELT was translated from its English version with cultural adaptations. The performance on the 20-item CANELT collected from 56 PWA and 100 neurologically healthy Cantonese-speaking controls aged 30 to 79 years was orthographically transcribed. Scoring was based on the completeness of the main concepts produced in the preamble and subsequent elaborations, defined as Opening (O) and New Information (NI). Measures examining the validity and reliability were conducted. RESULTS: An age effect was found in neurologically healthy controls, and therefore z scores were used for subsequent comparisons between neurologically healthy controls and PWA. The test showed strong evidence for known-group validity in both O [χ2 (2) = 95.2, p < .001] and NI [χ2 (2) = 100.4, p < .001]. A moderate to strong correlation was found between CANELT and standardized aphasia assessment tools, suggesting satisfactory concurrent validity. Reliability measures were excellent in terms of internal consistency (Cronbach's α of .95 for both 'O' and 'NI'), test-retest reliability (ICC = .96; p < .001), intra-rater reliability (ICC = 1.00; p < .001), and inter-rater reliability for O (ICC = .99; p < .001) and NI (ICC = .99; p < .001). Sensitivity and specificity for O are 97% and 76.8%, respectively, while for NI, a sensitivity of 95% and specificity of 91.1% were obtained. CONCLUSIONS: Measures on validity and reliability yielded promising results, suggesting CANELT as a useful and reliable functional communication assessment for PWA. Its application in managing PWA and potential areas for development are discussed.
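The sensitivity and specificity figures reported above come from a standard 2x2 classification table. An illustrative sketch (the example counts are hypothetical, not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: proportion of true cases (e.g. PWA) the test
    flags as affected; specificity: proportion of non-cases (e.g.
    neurologically healthy controls) it correctly clears."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts for illustration only:
sens, spec = sensitivity_specificity(tp=54, fn=2, tn=77, fp=23)
```

The O/NI contrast in the abstract (97%/76.8% vs. 95%/91.1%) illustrates the usual trade-off: a cutoff that catches nearly every case tends to clear fewer non-cases.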


Subject(s)
Aphasia , Language Tests , Humans , Middle Aged , Female , Male , Aged , Adult , Aphasia/diagnosis , Aphasia/physiopathology , Language Tests/standards , Reproducibility of Results , Language
6.
PLoS One ; 16(11): e0258946, 2021.
Article in English | MEDLINE | ID: mdl-34793469

ABSTRACT

The lack of standardized language assessment tools in Russian impedes clinical work, evidence-based practice, and research in Russian-speaking clinical populations. To address this gap in assessment of neurogenic language disorders, we developed and standardized a new comprehensive assessment instrument, the Russian Aphasia Test (RAT). The principal novelty of the RAT is that each subtest corresponds to a specific level of linguistic processing (phonological, lexical-semantic, syntactic, and discourse) in different domains: auditory comprehension, repetition, and oral production. In designing the test, we took into consideration various (psycho)linguistic factors known to influence language performance, as well as specific properties of Russian. The current paper describes the development of the RAT and reports its psychometric properties. A tablet-based version of the RAT was administered to 85 patients with different types and severity of aphasia and to 106 age-matched neurologically healthy controls. We established cutoff values for each subtest indicating deficit in a given task and cutoff values for aphasia based on the Receiver Operating Characteristic curve analysis of the composite score. The RAT showed very high sensitivity (> .93) and specificity (> .96), substantiating its validity for determining presence of aphasia. The test's high construct validity was evidenced by strong correlations between subtests measuring similar linguistic processes. The concurrent validity of the test was also strong as demonstrated by a high correlation with an existing aphasia battery. Overall high internal, inter-rater, and test-retest reliability were obtained. The RAT is the first comprehensive aphasia language battery in Russian with properly established psychometric properties. It is sensitive to a wide range of language deficits in aphasia and can reliably characterize individual profiles of language impairments.
Notably, the RAT is the first comprehensive aphasia test in any language to be fully automated for administration on a tablet, maximizing standardization of presentation and scoring procedures.


Subject(s)
Aphasia/diagnosis , Language Tests/standards , Language , Psychometrics , Adolescent , Adult , Aphasia/epidemiology , Aphasia/pathology , Aphasia/psychology , Comprehension/physiology , Computers , Female , Humans , Male , Middle Aged , Reference Standards , Russia/epidemiology , Semantics , Young Adult
7.
Distúrb. comun ; 33(2): 195-203, jun. 2021. tab, ilus
Article in Portuguese | LILACS | ID: biblio-1400828

ABSTRACT

Purpose: to describe speech therapy instruments for oral language evaluation published in Brazilian periodicals, and to analyze the validation procedures used. Method: Casuistry: all volumes of the periodicals Audiology Communication Research (ACR), Revista CEFAC (CEFAC), Revista Distúrbios da Comunicação (DIC) and Communication Disorders and Sciences (CoDAS) published from January/2017 to July/2019. Publications were selected based on titles, abstracts, descriptors, and full-text readings, and were then categorized according to the following variables: periodical, volume/number, date, purpose (elaboration or translation/adaptation of oral language evaluation instruments), original language (in translation/adaptation cases), sample size and statistical validation techniques used (validity and reliability). Results: Most of the articles were aimed at children and proposed the development of a new instrument. The predominance of studies presenting content validation is noteworthy; however, few performed reliability testing with Cronbach's alpha. Only one study performed a sensitivity and specificity test, and no study published in the period calculated predictive values, likelihood ratios, or ROC curves. Conclusion: the results indicate limitations in validation studies and suggest caution regarding the use of language assessment instruments, both in clinical activity and in research.


Subject(s)
Humans , Male , Female , Periodicals as Topic/statistics & numerical data , Speech, Language and Hearing Sciences , Language Tests/standards , Bibliometrics , Cross-Sectional Studies , Validation Studies as Topic
8.
PLoS One ; 16(4): e0248986, 2021.
Article in English | MEDLINE | ID: mdl-33822802

ABSTRACT

We study correlations between the structure and properties of a free association network of the English language and solutions of psycholinguistic Remote Association Tests (RATs). We show that the average hardness of individual RATs is largely determined by the relative positions of the test words (stimuli and response) on the free association network. We argue that the solution of RATs can be interpreted as a first-passage search problem on a network whose vertices are words and whose links are associations between words. We propose different heuristic search algorithms and demonstrate that in "easily solved" RATs (those solved within 15 seconds by more than 64% of subjects) the solution is governed by "strong" network links (i.e. strong associations) directly connecting stimuli and response, and thus the efficient strategy consists of activating such strong links. In turn, the most efficient mechanism for solving medium and hard RATs consists of preferentially following a sequence of "moderately weak" associations.
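The strong-link strategy described above, activating direct associations from each stimulus and looking for a common associate, can be sketched as a toy search on a weighted graph. The network, words, and weights below are invented for illustration and are not the paper's data or algorithm:

```python
# Toy word association network: edge weights stand in for
# association strengths (all values invented for illustration).
network = {
    "cream": {"ice": 0.6, "cheese": 0.3},
    "skate": {"ice": 0.5, "board": 0.4},
    "water": {"ice": 0.2, "drink": 0.7},
}

def strongest_common_associate(stimuli):
    """Score each candidate response by summing its association
    strength from every stimulus word, then return the word with
    the highest total -- a direct-link strategy suited to easy RATs."""
    scores = {}
    for stimulus in stimuli:
        for word, weight in network.get(stimulus, {}).items():
            scores[word] = scores.get(word, 0.0) + weight
    return max(scores, key=scores.get)

# strongest_common_associate(["cream", "skate", "water"]) -> "ice"
```

For medium and hard items, where no single strong link reaches the response, the paper's account suggests a multi-step walk along moderately weak links would be needed instead of this one-hop scoring.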


Subject(s)
Language Tests/standards , Word Association Tests/standards , Algorithms , Humans , Language , Psycholinguistics/methods
9.
Appl Neuropsychol Child ; 10(1): 37-52, 2021 Jan.
Article in English | MEDLINE | ID: mdl-31076015

ABSTRACT

Arabic is characterized by extensive dialectal variation, diglossia, and substantial morphological complexity. Arabic lacks comprehensive diagnostic tools that would allow for a systematic evaluation of its development, critical for the early identification of language difficulties in the spoken and written domains. To address this gap, we have developed an assessment battery called Arabic Language: Evaluation of Function (ALEF), aimed at children aged 3 to 11 years. ALEF consists of 17 subtests indexing different language domains, modalities, and associated skills and representational systems. We administered the ALEF battery to native Gulf Arabic-speaking children (n = 467; ages 2.5 to 10.92; 55% boys; 20 children in each 6-month age band) in Saudi Arabia in two data collection waves. Analyses examining the psychometric properties of the instrument indicated that after the removal of misfitting items, the ALEF subtests had reliability coefficients in the range from 0.78 to 0.98, and resulting subtest scores displayed a consistent profile of positive intercorrelations and age effects. Taken together, the results indicate that the ALEF battery has good psychometric properties, and can be used for the purpose of evaluating early language development in Gulf Arabic speaking children, pending further refinement of the test structure, examination of gender-related differential item functioning, and norming.
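The subtest reliability coefficients reported above (0.78 to 0.98) are internal-consistency estimates. One common such estimate, Cronbach's alpha, can be sketched as follows; whether ALEF used alpha specifically is not stated in the abstract, and the item scores in the test case are invented:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns, where
    each column holds all examinees' scores on one item."""
    k = len(items)            # number of items
    n = len(items[0])         # number of examinees

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total test score per examinee, summed across items.
    totals = [sum(col[i] for col in items) for i in range(n)]
    item_var_sum = sum(sample_var(col) for col in items)
    return (k / (k - 1)) * (1 - item_var_sum / sample_var(totals))
```

Alpha rises as items covary more strongly relative to their individual variances, which is why removing misfitting items, as the study did, tends to improve it.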


Subject(s)
Language Development Disorders/diagnosis , Language Development , Language Tests/standards , Psychometrics/standards , Child , Child, Preschool , Female , Humans , Infant , Male , Psycholinguistics , Reproducibility of Results , Saudi Arabia
10.
Lang Speech Hear Serv Sch ; 52(1): 288-303, 2021 01 19.
Article in English | MEDLINE | ID: mdl-33007163

ABSTRACT

Purpose Standardized norm-referenced tests are an important aspect of language assessment for school-age children. This study explored the language test selection practices of school-based speech-language pathologists (SLPs) working with elementary school children suspected of having developmental language disorder. Specifically, we investigated which tests were most commonly selected as clinicians' first-choice and follow-up tests, which factors impacted their test selection decisions, and what sources of information they used to determine the psychometric quality of tests. Method School-based SLPs completed a web-based questionnaire regarding their use of norm-referenced language tests. A total of 370 elementary school SLPs completed the questionnaire. Results The vast majority of participants indicated that omnibus language tests are their first choice of test. For follow-up tests, participants selected semantics tests, especially single-word vocabulary tests, significantly more often than tests of pragmatics, processing skills, and morphology/syntax. Participants identified multiple factors as affecting test selection, including availability, familiarity, psychometric features, and others. Although more SLPs reported using data-based than subjective sources of information to judge the psychometric quality of tests, a substantial proportion reported that they relied on subjective sources. Conclusions Clinicians have a strong preference for using omnibus language tests. Follow-up test selection does not appear to align with the language difficulties most associated with developmental language disorder. The substantial use of subjective information about psychometric qualities of tests suggests that many SLPs may not attend to the technical meanings of terms such as validity, reliability, and diagnostic accuracy. These results indicate a need for improvement in evidence-based language assessment practices. 
Supplemental Material https://doi.org/10.23641/asha.13022471.


Subject(s)
Child Language , Language Development Disorders/diagnosis , Language Tests/standards , Schools , Speech-Language Pathology/methods , Child , Data Accuracy , Evidence-Based Practice , Female , Follow-Up Studies , Humans , Male , Psychometrics , Reproducibility of Results , Surveys and Questionnaires
11.
Rev. CEFAC ; 23(2): e12520, 2021. tab, graf
Article in English | LILACS | ID: biblio-1287869

ABSTRACT

Purpose: to analyze the content and translation guidelines of instruments meant to assess language, speech sound production, and communicative skills of children, adapted to Brazilian Portuguese. Methods: a search was conducted in national and international databases to select articles on the assessment of language, speech, and communicative skills in children, considering the descriptors "translation", "adaptation", "cultural adaptation", "cross-cultural adaptation", "language", "speech", and "pragmatic". The search was conducted in SciELO, the Virtual Health Library, PubMed/MEDLINE, and the Latin American and Caribbean Health Sciences Literature. Results: eight assessment instruments compatible with the inclusion criteria were found. Conclusion: of the instruments found, four approached specific investigations, such as syntax, narrative, pragmatic skills, and speech sound organization, while the other four had a more encompassing profile, verifying form, content, and/or use (pragmatics). Concerning the guidelines, the most recurrent stages across the translation proposals were translation, conciliation of the previous stage or synthesis version, back-translation, reviewing committee, pretest, and final version. Conceptual, item, and operational equivalences were the most frequently cited for verification.


Subject(s)
Humans , Language Tests/standards , Translating , Brazil , Cross-Cultural Comparison
12.
S Afr J Commun Disord ; 67(1): e1-e10, 2020 Nov 19.
Article in English | MEDLINE | ID: mdl-33314952

ABSTRACT

BACKGROUND: This study used local resources (community members, a photographer, and speech therapists) to develop a new test for screening receptive language skills and sought to determine its feasibility for use with a larger population in KwaZulu-Natal province, South Africa. OBJECTIVES: The aim of this study was to develop a one-word receptive vocabulary test appropriate for screening and diagnosis of isiZulu-speaking preschool-aged children. The objectives were (1) to determine the sensitivity and specificity of the Ingwavuma Receptive Vocabulary Test (IRVT) and (2) to determine the relationship of IRVT scores with age, gender, time and the confounding variables of stunting and school. METHOD: The study was quantitative, cross-sectional and descriptive in nature. The IRVT was piloted before being administered to 51 children (4-6 years old). Statistical analysis of test item prevalence, correlations with confounding variables and validity measurements was conducted using the Statistical Package for the Social Sciences version 25 (SPSS 25). RESULTS: The IRVT was able to profile the receptive skills of the preschool children in Ingwavuma. The mean raw score was 35 for boys and 32 for girls. There was a significant Pearson correlation between test scores and age (0.028, p 0.05) with a high effect size (Cohen's d = 0.949), gender (r = -0.032, p 0.05) with a medium effect size (Cohen's d = 0.521) and school (r = 0.033, p 0.05) with a small effect size (Cohen's d = 0.353). The sensitivity and specificity values were 66.7% and 33%, respectively. The test reliability (Cronbach's alpha) was 0.739, with good test-retest reliability. CONCLUSION: The IRVT has potential as a screening test for isiZulu receptive vocabulary skills amongst preschool children. This study contributes to the development of clinical and research resources for assessing language abilities.


Subject(s)
Child Language , Language Development Disorders/diagnosis , Language Tests/standards , Child , Child, Preschool , Cross-Sectional Studies , Feasibility Studies , Female , Humans , Male , Reproducibility of Results , Sensitivity and Specificity , South Africa , Vocabulary
13.
J Speech Lang Hear Res ; 63(12): 3982-3990, 2020 12 14.
Article in English | MEDLINE | ID: mdl-33186507

ABSTRACT

Purpose There has been increased interest in using telepractice for involving more diverse children in research and clinical services, as well as when in-person assessment is challenging, such as during COVID-19. Little is known, however, about the feasibility, reliability, and validity of language samples when conducted via telepractice. Method Child language samples from parent-child play were recorded either in person in the laboratory or via video chat at home, using parents' preferred commercially available software on their own device. Samples were transcribed and analyzed using Systematic Analysis of Language Transcripts software. Analyses compared measures between-subjects for 46 dyads who completed video chat language samples versus 16 who completed in-person samples; within-subjects analyses were conducted for a subset of 13 dyads who completed both types. Groups did not differ significantly on child age, sex, or socioeconomic status. Results The number of usable samples and percent of utterances with intelligible audio signal did not differ significantly for in-person versus video chat language samples. Child speech and language characteristics (including mean length of utterance, type-token ratio, number of different words, grammatical errors/omissions, and child speech intelligibility) did not differ significantly between in-person and video chat methods. This was the case for between-group analyses and within-child comparisons. Furthermore, transcription reliability (conducted on a subset of samples) was high and did not differ between in-person and video chat methods. Conclusions This study demonstrates that child language samples collected via video chat are largely comparable to in-person samples in terms of key speech and language measures. Best practices for maximizing data quality for using video chat language samples are provided.
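Mean length of utterance, one of the measures compared between in-person and video-chat samples above, can be sketched in simplified form. Counting words is a deliberate simplification here; SALT-style analyses typically compute MLU in morphemes, which requires morphological segmentation:

```python
def mlu_words(utterances):
    """Mean length of utterance in words: average number of
    whitespace-separated tokens per utterance in a transcript."""
    lengths = [len(u.split()) for u in utterances]
    return sum(lengths) / len(lengths)

# A tiny invented transcript fragment:
sample = ["the dog runs", "he is big", "no"]
# mlu_words(sample) = (3 + 3 + 1) / 3, roughly 2.33
```

Because MLU is a per-utterance average, it is robust to audio dropouts that remove whole utterances, which is consistent with the finding that video-chat samples yielded comparable values despite variable signal quality.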


Subject(s)
COVID-19 , Language Disorders/diagnosis , Language Tests/standards , Speech Production Measurement/standards , Telemedicine/standards , Child Language , Child, Preschool , Feasibility Studies , Female , Humans , Infant , Longitudinal Studies , Male , Non-Randomized Controlled Trials as Topic , Reproducibility of Results , SARS-CoV-2 , Speech Intelligibility , Speech Production Measurement/methods , Telemedicine/methods
14.
Span J Psychol ; 23: e39, 2020 Oct 15.
Article in English | MEDLINE | ID: mdl-33054889

ABSTRACT

Sentence repetition tasks have been widely used in recent years as a diagnostic tool for developmental language disorders. However, in Spanish there are few (if any) such instruments, especially for younger children. In this context, we developed a new Sentence Repetition Task for assessing the language (morphosyntactic) abilities of very young Spanish-speaking children. A list of 33 sentences of different length and complexity was created and included in the task. A total of 130 typically developing children from 2 to 4 years of age were engaged in a play situation and asked to repeat the sentences. Children's answers were scored for accuracy at the sentence and word levels, and error analysis at the word level was undertaken. In addition, a subsample of 92 children completed a nonword repetition task. Initial results show the task's adequacy for children from 2 to 4 years of age, its capacity to discriminate between different developmental levels, and its concurrent validity with the nonword repetition task.


Subject(s)
Language Development , Language Tests/standards , Child, Preschool , Female , Humans , Male , Play and Playthings , Reproducibility of Results
15.
S Afr J Commun Disord ; 67(1): e1-e8, 2020 Sep 04.
Article in English | MEDLINE | ID: mdl-32896134

ABSTRACT

BACKGROUND: Specific language impairment (SLI) is difficult to identify because it is a subtle linguistic difficulty, and few measures are available to differentiate between typical and atypical language development in bilinguals. Sentence repetition (SR) has strong theoretical foundations and research evidence as a valid tool for identifying SLI in bilinguals. OBJECTIVE: This study assessed the value of SR, using peer-group comparisons, for identifying Sepedi-English bilingual children at risk of SLI. METHOD: One hundred and two Grade 3 learners in three different educational contexts were assessed on equivalent English and Sepedi SR measures. RESULTS: Eleven participants who scored between 1 and 2 standard deviations (SD) below the peer-group means on both the English and Sepedi SR tests were identified as having possible SLI. Learners in the English language of learning and teaching (LoLT) - Sepedi additional language (SAL) context obtained similar scores in both languages, a higher score in English than the English LoLT group, and a higher score in Sepedi than the Sepedi LoLT - EAL group. The English LoLT group obtained a significantly higher score in English than in Sepedi and a significantly lower score than the other two groups in Sepedi. The Sepedi LoLT group obtained a significantly higher score in Sepedi than in English, their additional language, in which they obtained a significantly lower score than the other two groups. CONCLUSION: Sentence repetition tasks are valid screening tools for identifying bilingual children with SLI by comparing them to peer groups. The SR tests were sensitive to language practices in different educational contexts. A bilingual approach that uses both English and the home language as academic languages appears to lead to better language outcomes.
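The peer-group flagging rule reported (between 1 and 2 SD below the peer means on both language versions) can be sketched as a simple z-score check. The data and function names below are invented for illustration; the study's actual computation may differ.

```python
# Sketch of the dual-language peer-group rule: flag a learner as at possible
# risk of SLI when BOTH sentence-repetition scores fall 1 to 2 standard
# deviations below the respective peer-group means. Data are invented.
from statistics import mean, stdev

def flagged(score, peers):
    z = (score - mean(peers)) / stdev(peers)
    return -2 <= z <= -1          # between 1 and 2 SD below the mean

def at_risk(score_en, score_sep, peers_en, peers_sep):
    return flagged(score_en, peers_en) and flagged(score_sep, peers_sep)
```

Requiring the deficit in both languages is what distinguishes possible impairment from ordinary cross-language dominance effects.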


Subject(s)
Child Language , Language Tests/standards , Multilingualism , Specific Language Disorder/diagnosis , Students/psychology , Child , Cross-Sectional Studies , Female , Humans , Male , Repetition Priming , Reproducibility of Results
16.
Medicine (Baltimore) ; 99(37): e22165, 2020 Sep 11.
Article in English | MEDLINE | ID: mdl-32925781

ABSTRACT

Aphasia has a high incidence in stroke patients and seriously impairs language comprehension, verbal communication, and social activities. Screening aphasic patients during the acute phase of stroke is therefore crucial for language recovery and rehabilitation. The present study developed a Chinese version of the Language Screening Test (CLAST) and validated it in post-stroke patients. The CLAST was adapted from the Language Screening Test developed by Constance et al to incorporate Chinese cultural and linguistic specificities, and was administered to 207 acute stroke patients and 89 stabilized aphasic or non-aphasic patients. Its reliability and validity were assessed against the Western Aphasia Battery (WAB) test, and a cut-off for the CLAST in Chinese patients was determined by ROC curve analysis. The CLAST comprised 5 subtests and 15 items, including 2 subscores: an expression index (8 points, assessing naming, repetition, and automatic speech) and a receptive index (7 points maximum, evaluating picture recognition and verbal instructions). Analysis of the alternate-form reliability of the questionnaire showed a retest correlation coefficient of 0.945 (P < .001). Intraclass correlation coefficients of the three rating teams were >0.98 (P < .001). Internal consistency analysis showed a Cronbach's alpha coefficient of 0.909 (P < .001). The non-aphasia group showed higher scores than the aphasia group (14.2 ± 1.3 vs 10.6 ± 3.8) (P < .01). The questionnaire showed good construct validity by factor analysis. ROC curve analysis showed high sensitivity and specificity for the CLAST, with a cut-off of 13.5. The CLAST is suitable for Chinese post-stroke patients during the acute phase, with high reliability, validity, sensitivity, and specificity.
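One common way to derive a screening cut-off from ROC analysis is Youden's J (sensitivity + specificity − 1). The abstract does not state which criterion produced the 13.5 cut-off, so the sketch below only illustrates the general idea, with invented scores:

```python
# Hedged sketch of ROC cut-off selection via Youden's J. Lower total scores
# are treated as indicating aphasia; candidate cut-offs are midpoints between
# adjacent observed scores. All data are invented for illustration.

def youden_cutoff(scores_aphasic, scores_nonaphasic):
    candidates = sorted(set(scores_aphasic) | set(scores_nonaphasic))
    best_c, best_j = None, -1.0
    for lo, hi in zip(candidates, candidates[1:]):
        c = (lo + hi) / 2
        sens = sum(s < c for s in scores_aphasic) / len(scores_aphasic)
        spec = sum(s >= c for s in scores_nonaphasic) / len(scores_nonaphasic)
        j = sens + spec - 1            # Youden's J statistic
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j
```

With well-separated groups the midpoint between the highest aphasic and lowest non-aphasic score (e.g., 13.5 between 13 and 14) maximizes J.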


Subject(s)
Aphasia/diagnosis , Aphasia/etiology , Language Tests/standards , Stroke/complications , Aged , Aged, 80 and over , China , Female , Humans , Language , Male , Middle Aged , Reproducibility of Results , Sensitivity and Specificity
17.
Lang Speech Hear Serv Sch ; 51(4): 1112-1123, 2020 10 02.
Article in English | MEDLINE | ID: mdl-32910720

ABSTRACT

Purpose The purpose of this study was to examine how well students' response to a morphological vocabulary intervention can be predicted before the start of the intervention from traditional static assessments and to determine whether a dynamic assessment with graduated prompts improves the prediction. Method A planned secondary analysis of a randomized trial of a morphological vocabulary intervention for fifth-grade students with limited vocabulary was conducted. Response to this intervention was examined for 111 participants based on their development in definitions of morphologically transparent words from pretest to posttest. Traditional static measures of vocabulary, knowledge of morphology, and morphological analysis as well as a dynamic assessment of morphological analysis were evaluated as predictors of students' response to intervention. Results The static pretest measures predicted more than half of the overall variance in students' response to intervention and provided a good classification of students with subsequent poor or good response to intervention. The single best static predictor was the static assessment of morphological analysis. Furthermore, the dynamic assessment added significantly to the prediction of the overall variance in students' response to intervention and to the correct early classification of students as poor or good responders. Conclusions The results suggest that an acceptable level of prediction of students' response to morphological vocabulary intervention can be obtained by means of a couple of static morphological measures. This study also provides evidence for the added predictive value of a dynamic assessment of morphological analysis.
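The "added predictive value" finding reflects standard hierarchical regression: fit response to intervention on the static predictors, then on static plus dynamic predictors, and compare the gain in R². The sketch below (pure-Python OLS via normal equations, invented data) illustrates the logic; it is not the authors' analysis.

```python
# Sketch of hierarchical regression for incremental validity: R^2 of a model
# with static predictors vs. static + dynamic predictors. Data are invented.

def ols_r2(X, y):
    """R^2 of a least-squares fit of y on the columns of X (intercept added)."""
    rows = [[1.0] + list(x) for x in X]
    k = len(rows[0])
    # Normal equations A b = c
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    c = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        c[i], c[p] = c[p], c[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for j in range(i, k):
                A[r][j] -= f * A[i][j]
            c[r] -= f * c[i]
    b = [0.0] * k
    for i in reversed(range(k)):
        b[i] = (c[i] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    yhat = [sum(bj * rj for bj, rj in zip(b, r)) for r in rows]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

static = [[3], [5], [2], [8], [6], [4]]        # e.g., static morphological score
dynamic = [1, 4, 2, 6, 5, 3]                   # e.g., graduated-prompts score
response = [10, 18, 9, 27, 22, 14]             # pre-to-post gain (invented)
r2_static = ols_r2(static, response)
r2_full = ols_r2([s + [d] for s, d in zip(static, dynamic)], response)
delta_r2 = r2_full - r2_static                 # incremental validity of dynamic assessment
```

A significant ΔR² is the evidence the study reports for the dynamic assessment adding to the static measures.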


Subject(s)
Language Tests/standards , Reading , Vocabulary , Child , Female , Humans , Knowledge , Male , Students
18.
J Clin Exp Neuropsychol ; 42(5): 459-472, 2020 07.
Article in English | MEDLINE | ID: mdl-32397824

ABSTRACT

INTRODUCTION: Embedded performance validity tests (PVTs) allow for continuous and economical validity assessment during neuropsychological evaluations; however, similar to their freestanding counterparts, a limitation of well-validated embedded PVTs is that the majority are memory-based. This study cross-validated several previously identified non-memory-based PVTs derived from language, processing speed, and executive functioning tests within a single mixed clinical neuropsychiatric sample with and without cognitive impairment. METHOD: This cross-sectional study included data from 124 clinical patients who underwent outpatient neuropsychological evaluation. Validity groups were determined by four independent criterion PVTs (failing ≤1 or ≥2), resulting in 98 valid (68% cognitively impaired) and 26 invalid performances. In total, 23 previously identified embedded PVTs derived from Verbal Fluency (VF), Trail Making Test (TMT), Stroop (SCWT), and Wisconsin Card Sorting Test (WCST) were examined. RESULTS: All VF, SCWT, and TMT PVTs, along with WCST Categories, significantly differed between validity groups (ηp² = .05-.22), with areas under the curve (AUCs) of .65-.81 and 19-54% sensitivity (≥89% specificity) at optimal cut-scores. When subdivided by impairment status, all PVTs except WCST Failures to Maintain Set were significant (AUCs = .75-.94), with 33-85% sensitivity (≥90% specificity) in the cognitively unimpaired group. Among the cognitively impaired group, most VF, TMT, and SCWT PVTs remained significant, albeit with decreased accuracy (AUCs = .65-.76) and sensitivity (19-54%) at optimal cut-scores, whereas all WCST PVTs were nonsignificant. Across groups, SCWT embedded PVTs evidenced the strongest psychometric properties. CONCLUSION: VF, TMT, and SCWT embedded PVTs generally demonstrated moderate accuracy for identifying invalid neuropsychological performance. However, performance on these non-memory-based PVTs from processing speed and executive functioning tests is not immune to the effects of cognitive impairment, such that alternate cut-scores (with reduced sensitivity if adequate specificity is maintained) are indicated in cases where the clinical history is consistent with cognitive impairment. In contrast, WCST indices generally had poor accuracy.
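The cut-score convention in this literature (maximize sensitivity subject to a specificity floor, here ≥90%) differs from Youden-style optimization and can be sketched as below. Data and names are invented for illustration:

```python
# Sketch of PVT cut-score selection: among candidate cut-offs, keep those with
# specificity at or above a floor (default .90), then take the one with the
# highest sensitivity. Lower scores are treated as signs of invalid
# performance; all data are invented.

def cutoff_at_specificity(invalid_scores, valid_scores, min_spec=0.90):
    best = None  # (sensitivity, cutoff)
    for c in sorted(set(invalid_scores) | set(valid_scores)):
        sens = sum(s <= c for s in invalid_scores) / len(invalid_scores)
        spec = sum(s > c for s in valid_scores) / len(valid_scores)
        if spec >= min_spec and (best is None or sens > best[0]):
            best = (sens, c)
    return best
```

Raising the specificity floor (as the abstract recommends for cognitively impaired examinees) necessarily lowers the achievable sensitivity, which is exactly the trade-off the conclusion describes.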


Subject(s)
Cognitive Dysfunction/diagnosis , Executive Function , Malingering/diagnosis , Neuropsychological Tests/standards , Psychomotor Performance , Adult , Cross-Sectional Studies , Executive Function/physiology , Female , Humans , Language Tests/standards , Male , Middle Aged , Outpatients , Psychomotor Performance/physiology , Reproducibility of Results , Sensitivity and Specificity , Stroop Test/standards , Trail Making Test/standards , Wisconsin Card Sorting Test/standards
19.
Dev Neurorehabil ; 23(6): 402-406, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32419557

ABSTRACT

The phenotype of triple X syndrome comprises a variety of physical, psychiatric, and cognitive features. Recent evidence suggests that patients are prone to severe language impairments. However, it remains unclear whether verbal impairments are pervasive at all levels of language, or whether some domains are relatively more spared than others. Here we document the language profile of one patient with triple X, using standardized language tests and reports. Results concur in showing that impairments are noticeable in both expressive and receptive language skills, and in vocabulary as well as in structural components of language. Although receptive ability on some tests appears relatively spared, even here the patient's (A's) performance is clearly below average. This single case study further underscores that language and communication at all levels can be severely compromised in patients with triple X. Practitioners should be aware of the limited language abilities that may exist in these patients.


Subject(s)
Language Development Disorders/diagnosis , Language Tests/standards , Sex Chromosome Disorders of Sex Development/diagnosis , Trisomy/diagnosis , Child , Chromosomes, Human, X , Female , Humans , Language Development Disorders/etiology , Male , Sex Chromosome Aberrations , Sex Chromosome Disorders of Sex Development/complications , Vocabulary
20.
J Fluency Disord ; 64: 105762, 2020 06.
Article in English | MEDLINE | ID: mdl-32445988

ABSTRACT

PURPOSE: The purpose of the study was to determine whether differences exist between young English- and Korean-speaking children who stutter (CWS) in the loci of stuttering. METHOD: Participants were 10 Korean-speaking and 11 English-speaking CWS between the ages of 3 and 7 years. Participants produced narratives while viewing various picture scenes and a wordless picture book. RESULTS: Findings indicated that Korean-speaking CWS stuttered more on content than function words, whereas English-speaking CWS stuttered more on function than content words. Furthermore, both Korean- and English-speaking CWS tended to stutter more on utterance-initial words. These findings appear to be related to differences in linguistic/syntactic structure between Korean and English. Specifically, in the Korean-speaking CWS's narratives, most utterance-initial words (73.60%) were content words, whereas in the English-speaking CWS's narratives, most utterance-initial words (83.57%) were function words. CONCLUSION: These preliminary findings, although in need of replication with a larger sample, suggest that the word-class (content/function) contribution to stuttering loci is language-specific, whereas the word-position (utterance-initial) contribution is language-nonspecific. Given that the true characteristics of stuttering may be language-nonspecific rather than language-specific, further research may need to focus more on stuttering loci related to word position than to word class.
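The loci tabulation underlying these results (classifying each stuttered word by word class and utterance position) can be sketched as below. The function-word list and data are invented; real analyses use language-specific classifications, especially for Korean.

```python
# Hypothetical sketch of stuttering-loci tabulation: for each stuttered word,
# count whether it is a content or function word and whether it occupies the
# utterance-initial position. Word lists and data are invented.

FUNCTION_WORDS = {"the", "a", "an", "i", "you", "is", "are", "to", "of", "and"}

def loci_counts(utterances, stuttered):
    """utterances: list of word lists; stuttered: set of (utt_idx, word_idx)."""
    counts = {"content": 0, "function": 0, "initial": 0}
    for u, w in stuttered:
        word = utterances[u][w].lower()
        counts["function" if word in FUNCTION_WORDS else "content"] += 1
        if w == 0:
            counts["initial"] += 1
    return counts
```

Cross-language comparison of such counts is what reveals the word-class effect as language-specific and the utterance-initial effect as shared.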


Subject(s)
Language Tests/standards , Stuttering/genetics , Child , Child, Preschool , Female , Humans , Language , Male , Republic of Korea