Results 1 - 5 of 5
1.
Sch Psychol; 38(3): 160-172, 2023 May.
Article in English | MEDLINE | ID: mdl-37184958

ABSTRACT

Curriculum-based measurement (CBM) has conventionally paired recommended fluency thresholds with accuracy criteria for instructional decision-making. Some scholars have argued for using accuracy to directly determine instructional need (e.g., Szadokierski et al., 2017). However, prior to this study, accuracy and fluency had not been directly examined to determine their separate and joint value for decision-making in CBM. Instead, the assumption that instruction emphasizing accurate responding should be monitored with accuracy data evolved into two practices: complementing CBM fluency scores with accuracy, and using timed assessment to compute the percent of responses correct and applying accuracy criteria to determine instructional need. The purpose of this article was to examine fluency and accuracy as related but distinct metrics, each with its own psychometric properties, benefits, and limits. Findings suggest that the redundancy between accuracy and fluency causes them to perform comparably overall, but that (a) fluency is superior to accuracy when accuracy is computed on a timed sample of performance, (b) timed accuracy adds no benefit relative to fluency alone, and (c) accuracy collected under timed assessment conditions has substantial psychometric limitations that make it unsuitable for the formative instructional decisions commonly made with CBM data. The conventional inclusion of accuracy criteria in tandem with fluency criteria for instructional decision-making in CBM should be reconsidered: accuracy may add no predictive value while introducing additional opportunity for error due to the problems associated with unfixed trials in timed assessment. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
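To make the contrast concrete, here is a minimal Python sketch of the two metrics, assuming hypothetical probe data (the article itself reports no code): fluency keeps the timing information that timed accuracy discards.

```python
# Sketch of the two CBM metrics contrasted above (hypothetical data;
# not code from the article). Fluency counts correct responses per
# minute; timed accuracy is percent correct within the same timed
# sample, so its denominator (items attempted) varies trial to trial,
# the "unfixed trials" problem the article raises.

def fluency(correct: int, minutes: float) -> float:
    """Correct responses per minute on a timed CBM probe."""
    return correct / minutes

def timed_accuracy(correct: int, attempted: int) -> float:
    """Percent correct on a timed probe; 'attempted' is not fixed."""
    return 100.0 * correct / attempted

# Two students with near-identical timed accuracy but a threefold
# difference in fluency:
print(fluency(correct=60, minutes=1.0), timed_accuracy(60, 62))  # 60.0, ~96.8
print(fluency(correct=20, minutes=1.0), timed_accuracy(20, 21))  # 20.0, ~95.2
```

In the example, the two students are nearly indistinguishable on timed accuracy but differ threefold in fluency, the pattern behind findings (a) and (b).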


Subject(s)
Educational Measurement, Reading, Humans, Curriculum, Psychometrics
2.
Sch Psychol; 37(3): 213-224, 2022 May.
Article in English | MEDLINE | ID: mdl-35049329

ABSTRACT

Math curriculum-based measurement (CBM) is an essential tool for multi-tiered systems of support decision making, but the reliability of math CBMs has received little research attention, particularly via more rigorous methods such as generalizability (G) theory. Math CBM is historically organized into two domains: mastery measures and general outcome measures. This paper details 17 concurrent G and dependability studies in a partially crossed design investigating the reliability of mastery-measure CBMs for students (N = 263) in Grades K, 1, 3, 5, and 7. This study extends prior research by including novel grade levels and more rigorous math content; using generated rather than static measures; embedding a replication; examining bias by race and sex; and evaluating a simpler scoring method (answers correct) against digits correct. Most of the variance in scores was accounted for by the student. Probe form effects accounted for less than 5% of the variance for 16 of 17 measures, and results replicated across days. G coefficients exceeded .75 on the first trial for 14 of 17 measures. G studies were repeated by race, sex, and scoring metric. Overall, 1-4 min of assessment was sufficient to meet reliability thresholds, which is more efficient than what prior research has found for general outcome measures. This study supports the reliability of mastery measurement in math CBM and its use as a precise tool in the screening process. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
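For readers unfamiliar with G theory, here is a minimal sketch of how G (relative) and dependability (absolute, Phi) coefficients are formed from variance components in a persons-by-forms design; the variance components below are illustrative placeholders, not the study's estimates.

```python
# Hedged sketch of G-theory coefficients for a persons-by-forms design
# (illustrative variance components; not the study's actual values).
# The abstract reports that student (person) variance dominated and
# probe form effects were under 5% of total variance.

def g_coefficient(var_person: float, var_residual: float, n_forms: int) -> float:
    """Relative G coefficient: person variance over person variance
    plus the person-by-form/residual error averaged over forms."""
    return var_person / (var_person + var_residual / n_forms)

def dependability(var_person: float, var_form: float,
                  var_residual: float, n_forms: int) -> float:
    """Phi (absolute) coefficient: form main effects also count as error."""
    return var_person / (var_person + (var_form + var_residual) / n_forms)

# With person variance dominant and a small form effect, even a single
# probe form approaches the .75 threshold the abstract references:
print(round(g_coefficient(0.70, 0.25, n_forms=1), 2))        # 0.74
print(round(dependability(0.70, 0.05, 0.25, n_forms=1), 2))  # 0.7
```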


Subject(s)
Curriculum, Educational Measurement, Educational Measurement/methods, Humans, Mathematics, Reproducibility of Results, Students
3.
J Sch Psychol; 80: 54-65, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32540090

ABSTRACT

Given limited resources, schools are encouraged to consider not only what works, but also at what cost. Cost-effectiveness analysis offers a formal methodology for conceptualizing and calculating the ratio of the cost of implementing an intervention to its effects (i.e., the incremental cost-effectiveness ratio). This study used the ingredients method to analyze secondary data from a randomized controlled trial (N = 537 fourth- and fifth-grade students) to calculate the cost-effectiveness of a classwide math intervention, and it provides an overview of cost-effectiveness analysis for readers unfamiliar with the formal methodology. For fourth-graders, the incremental cost-effectiveness ratio was $169.07, indicating that it cost $169.07 per student for a 1 standard deviation increase in scaled scores on the state assessment. For fifth-graders, there was no statistically significant effect on the state assessment, but there were improvements in curriculum-based measurement (CBM) scores, with incremental cost-effectiveness ratios ranging from $65.08 to $469.12 depending on the type of CBM probe and the implementation context. Additionally, using the number needed to treat (i.e., the number of participants who must receive the intervention to prevent one failure on the state assessment), the cost to prevent one failure was $126.90 for a fourth-grade student receiving special education services or for a student who scored below the 25th percentile on the prior year's state assessment. Implications and directions for future research are discussed.
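A minimal sketch of the two quantities reported above. The per-student cost, effect size, and number-needed-to-treat inputs are hypothetical values chosen only so the outputs match the reported ratios; they are not figures taken from the article.

```python
# Sketch of the two cost-effectiveness quantities in this abstract
# (illustrative inputs only; see the article for the actual data).

def icer(delta_cost: float, delta_effect_sd: float) -> float:
    """Incremental cost-effectiveness ratio: added cost per student
    divided by the effect gain, here in standard-deviation units."""
    return delta_cost / delta_effect_sd

def cost_per_failure_prevented(cost_per_student: float, nnt: float) -> float:
    """Number needed to treat converts per-student cost into the cost
    of preventing one failure on the state assessment."""
    return cost_per_student * nnt

# Hypothetical inputs reproducing the reported fourth-grade figures:
# a $50.72 per-student cost and a 0.30 SD gain give an ICER of ~$169.07.
print(round(icer(50.72, 0.30), 2))                       # 169.07
print(round(cost_per_failure_prevented(42.30, 3.0), 2))  # 126.9
```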


Subject(s)
Cost-Benefit Analysis, Mathematics, School Health Services/economics, Schools/economics, Child, Curriculum, Female, Humans, Male, Numbers Needed To Treat, Research Design, Students
4.
Sch Psychol Q; 31(1): 28-42, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26192390

ABSTRACT

Several scholars have recommended using data from neuropsychological tests to develop interventions for reading and mathematics. The current study used meta-analytic procedures to examine the effects of using neuropsychological data within the intervention process. An electronic search yielded 1,126 articles, which were screened against inclusion criteria, resulting in 37 articles included in the current study. Each article was coded by how the data were used (screening, 86%; designing interventions, 14%), the size of the group to which interventions were delivered (small group, 45%; individual students, 45%; entire classroom, 10%), and the type of data collected (cognitive functions, 24%; reading fluency, 33%; phonemic/phonological awareness, 35%; mixed, 8%). A corrected Hedges' g was computed for every study and reported for variables of interest. A fail-safe N was also computed to determine how many studies with a zero effect would have to be found to change the conclusions. The data showed a small effect (g = 0.17) for measures of cognitive functioning, but moderate effects for measures of reading fluency (g = 0.43) and phonemic/phonological awareness (g = 0.48). Few studies examined measures of cognitive functioning within the intervention process. Taken together with previous research, the data do not support the use of cognitive measures to develop interventions; they instead favor more direct measures of academic skills (e.g., reading fluency) in a skill-by-treatment interaction. Implications for practice and future research are discussed.
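A minimal sketch of the two meta-analytic statistics named above, using standard textbook formulas (Hedges' small-sample correction and Rosenthal's fail-safe N); the example inputs are invented, and this is not code from the study.

```python
import math

# Hedged sketch of the two meta-analytic quantities in this abstract
# (standard formulas; invented example inputs).

def hedges_g(m1, m2, s1, s2, n1, n2):
    """Cohen's d with Hedges' small-sample correction factor J."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * df - 1)  # small-sample bias correction
    return j * d

def fail_safe_n(z_scores, alpha_z=1.645):
    """Rosenthal's fail-safe N: how many unpublished zero-effect studies
    would be needed to pull the combined one-tailed p above .05."""
    k = len(z_scores)
    return (sum(z_scores) ** 2) / (alpha_z ** 2) - k

print(round(hedges_g(105, 100, 10, 10, 30, 30), 2))  # ~0.49
print(round(fail_safe_n([2.1, 1.8, 2.5, 1.6]), 1))   # ~19.7
```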


Subject(s)
Cognition/physiology, Mathematics, Reading, Teaching, Humans, Neuropsychological Tests
5.
Behav Modif; 27(2): 191-216, 2003 Apr.
Article in English | MEDLINE | ID: mdl-12705105

ABSTRACT

This study compared two strategies for increasing accurate responding on a low-preference academic task by interspersing presentations of a preferred academic task. Five children attending a preschool program for children with delayed language development participated in this study. Preferred and nonpreferred tasks were identified through a multiple-stimulus, free-operant preference assessment. Contingent access to a preferred academic task was associated with improved response accuracy when compared to noncontingent access to that activity for 3 students. For 1 student, noncontingent access to the preferred activity led to improved response accuracy, and 1 student's analysis suggested the importance of procedural variety. The implications of these findings for use of preference assessments to devise instructional sequences that improve student responding are discussed.
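As a rough illustration, a multiple-stimulus, free-operant preference assessment is commonly scored by ranking stimuli by the share of session time a child engages with each. A minimal sketch with hypothetical task names and durations follows (the study reports no code).

```python
# Hedged sketch of scoring a multiple-stimulus, free-operant preference
# assessment (hypothetical data and task names; not from the study):
# stimuli are ranked by percentage of session time spent engaged.

def preference_hierarchy(engagement_seconds: dict, session_seconds: float):
    """Return stimuli ranked from most to least preferred, with the
    percentage of the session spent engaged with each."""
    ranked = sorted(engagement_seconds.items(),
                    key=lambda kv: kv[1], reverse=True)
    return [(task, 100.0 * secs / session_seconds) for task, secs in ranked]

# A 5-minute (300 s) session across three academic tasks:
print(preference_hierarchy({"letter matching": 180,
                            "counting": 75,
                            "tracing": 45}, 300.0))
# [('letter matching', 60.0), ('counting', 25.0), ('tracing', 15.0)]
```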


Subject(s)
Choice Behavior, Language Disorders, Teaching/methods, Child, Preschool, Female, Humans, Language Disorders/therapy, Male, Task Performance and Analysis