Results 1 - 20 of 28
1.
J Sch Psychol ; 104: 101318, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38871409

ABSTRACT

Advancing equity and justice in school mental health can address inequities in school-based services and outcome disparities. The purpose of this special issue is to promote equitable and just systems and practices in school mental health, thereby driving change in the institutional practices that have produced and reproduced inequities over time. The four articles in this special issue clarify a process for advancing equity in school mental health by addressing justice-centered variables and fostering connections across and within systems to realize a vision of comprehensive and integrated school mental health.


Subject(s)
Social Justice , Humans , School Mental Health Services , Schools , Mental Health
2.
Sch Psychol ; 38(2): 110-118, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36521127

ABSTRACT

The purpose of this study was to examine the accuracy of function-based decisions made in consideration of scores from the Intervention Selection Profile-Function (ISP-Function), a tool founded upon direct behavior rating (DBR) methodology. The ISP-Function is designed to be a brief measure, given the need for efficient and low-resource assessments in schools. Data from a previous investigation were used to create data reports for each of 34 elementary students with a history of exhibiting disruptive behavior in the classroom. The first report summarized ISP-Function data that the student's classroom teacher collected. The second report was representative of more typical functional behavior assessment (FBA), summarizing data collected via a functional assessment interview with the teacher, as well as systematic direct observation data. Nine school psychologists conducted blind reviews of these reports and derived decisions regarding the function of each student's behavior (e.g., adult attention or escape/avoidance). Gwet's agreement coefficients were statistically significant and suggested fair to almost perfect correspondence between ISP-Function and FBA reports. Limitations and implications for practice are discussed herein. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Educational Personnel , Problem Behavior , Adult , Humans , Child , Child Behavior , Students , Behavior Rating Scale
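The agreement analysis described in the entry above relies on Gwet's chance-corrected coefficients. Below is a minimal sketch of Gwet's AC1 for two sets of categorical function decisions; the student decisions, category labels, and values are hypothetical and are not drawn from the study.

```python
from collections import Counter

def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 chance-corrected agreement for two sets of categorical decisions."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = sorted(set(ratings_a) | set(ratings_b))
    q = len(categories)

    # Observed agreement: proportion of cases where both decisions match.
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Chance agreement: based on the pooled prevalence of each category.
    counts = Counter(ratings_a) + Counter(ratings_b)
    pe = sum((counts[c] / (2 * n)) * (1 - counts[c] / (2 * n)) for c in categories) / (q - 1)

    return (pa - pe) / (1 - pe)

# Hypothetical function decisions for a handful of students (labels illustrative only).
isp_function = ["adult_attention", "escape", "peer_attention", "adult_attention", "escape", "tangible"]
fba_reports  = ["adult_attention", "escape", "adult_attention", "adult_attention", "escape", "tangible"]
print(f"Gwet's AC1 = {gwet_ac1(isp_function, fba_reports):.2f}")
```

Unlike Cohen's kappa, AC1 estimates chance agreement from pooled category prevalence, which makes it less affected by the skewed base rates that can depress kappa.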
3.
J Sch Psychol ; 83: 66-88, 2020 12.
Article in English | MEDLINE | ID: mdl-33276856

ABSTRACT

The purpose of this study was to support the development and initial validation of the Intervention Selection Profile (ISP)-Skills, a brief 14-item teacher rating scale intended to inform the selection and delivery of instructional interventions at Tier 2. Teacher participants (n = 196) rated five students from their classroom across four measures (total student n = 877). These measures included the ISP-Skills and three criterion tools: Social Skills Improvement System (SSIS), Devereux Student Strengths Assessment (DESSA), and Academic Competence Evaluation Scales (ACES). Diagnostic classification modeling (DCM) suggested that an expert-created Q-matrix, which specified relations between ISP-Skills items and hypothesized latent attributes, provided good fit to item data. DCM also indicated ISP-Skills items functioned as intended, with the magnitude of item ratings corresponding to the model-implied probability of attribute mastery. DCM was then used to generate skill profiles for each student, which included scores representing the probability of students mastering each of eight skills. Correlational analyses revealed large convergent relations between ISP-Skills probability scores and theoretically aligned subscales from the criterion measures. Discriminant validity was not supported, as ISP-Skills scores were also highly related to all other criterion subscales. Receiver operating characteristic (ROC) curve analyses informed the selection of cut scores from each ISP-Skills scale. Review of classification accuracy statistics associated with these cut scores (e.g., sensitivity and specificity) suggested that the cut scores reliably differentiated students with below average, average, and above average skills. Implications for practice and directions for future research are discussed, including those related to the examination of ISP-Skills treatment utility.


Subject(s)
Behavior Rating Scale/standards , Students/psychology , Academic Performance , Adult , Child , Child Behavior/psychology , Emotions , Female , Humans , Male , Reproducibility of Results , Schools , Sensitivity and Specificity , Social Skills
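The cut-score selection step described in the entry above can be illustrated with a standard ROC workflow. The sketch below uses simulated mastery-probability scores and a simulated binary criterion (not the ISP-Skills or criterion-measure data) and picks the cut point that maximizes Youden's J; the actual study may have weighed sensitivity and specificity differently.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Simulated stand-ins for the study's measures: a binary criterion
# (e.g., below-average skills per a criterion scale) and a continuous
# probability-of-mastery score from the screener.
criterion = rng.integers(0, 2, size=400)                                  # 0 = not at risk, 1 = at risk
score = rng.normal(loc=0.65 - 0.25 * criterion, scale=0.15, size=400).clip(0, 1)

# Higher mastery probability should indicate LOWER risk, so analyze 1 - score.
fpr, tpr, thresholds = roc_curve(criterion, 1 - score)
auc = roc_auc_score(criterion, 1 - score)

# Youden's J picks the cut point that jointly maximizes sensitivity and specificity.
j = tpr - fpr
best = int(np.argmax(j))

print(f"AUC = {auc:.2f}")
print(f"cut score (on the 1 - mastery scale) = {thresholds[best]:.2f}")
print(f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```

A cut point on the original mastery scale is simply 1 minus the printed threshold.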
4.
Sch Psychol ; 34(5): 531-540, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31169380

ABSTRACT

The purpose of this study was to evaluate the reliability, validity, and accuracy of scores from the Intervention Selection Profile-Function (ISP-Function): a brief functional assessment tool founded upon Direct Behavior Rating (DBR) methodology. Participants included 34 teacher-student dyads. Using the ISP-Function, teachers rated the extent to which students exhibited disruptive behavior, as well as the frequency with which disruptions were met with four consequences. Ratings were completed across three 10-min sessions, during which a research assistant also collected systematic direct observation (SDO) data regarding the same behavior and consequences. Results indicated that adequate temporal reliability (≥.70) was attained for the adult attention and peer attention targets across the three ratings; in contrast, an estimated 8 to 18 data points would be needed to achieve adequate reliability across the remaining targets. Findings further suggested that while ISP-Function ratings of disruptive behavior, adult attention, and peer attention were moderately to highly correlated with SDO data, correlations were in the low range for the access to items/activities and escape/avoidance targets. Finally, analysis of difference scores showed that on average, mean ISP-Function scores fell within only 0.33 to 1.81 points of mean SDO scores (on the 0-10 DBR scale). Agreement coefficients indicative of exact score agreement were less consistent, suggesting accuracy ranged from poor to substantial. Results are promising, but future research is necessary to support applied ISP-Function use. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Behavior Rating Scale/standards , Child Behavior , Problem Behavior , Students , Behavior Observation Techniques , Child , Female , Humans , Male , Reproducibility of Results , School Teachers
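Projections of how many ratings are needed to reach a reliability benchmark, like the 8 to 18 data points mentioned in the entry above, are commonly made with the Spearman-Brown prophecy formula. The sketch below applies that formula to hypothetical single-rating reliabilities; the study itself may have used a generalizability-theory decision study, and the reliability values here are illustrative only.

```python
import math

def ratings_needed(single_rating_reliability, target=0.70):
    """Spearman-Brown prophecy: how many ratings must be aggregated to reach a target
    reliability, given the reliability of a single rating."""
    r1, rt = single_rating_reliability, target
    k = (rt * (1 - r1)) / (r1 * (1 - rt))
    return math.ceil(k)

# Hypothetical single-rating reliabilities for two behavioral targets (not study values).
for target_name, r1 in [("escape/avoidance", 0.12), ("access to items/activities", 0.22)]:
    print(f"{target_name}: {ratings_needed(r1)} ratings to reach .70")
```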
5.
Sch Psychol ; 34(3): 261-270, 2019 May.
Article in English | MEDLINE | ID: mdl-30883158

ABSTRACT

The purposes of this study were twofold. The first was to use latent class analysis to identify groupings of students defined by the presence or absence of academic or behavioral risk. The second was to determine whether these groups differed across various dichotomous academic and behavioral outcomes (e.g., suspensions, office discipline referrals, statewide achievement test failure). Students (N = 1,488) were sampled from Grades 3-5. All students were screened for academic risk using the AIMSweb Reading Curriculum-Based Measure and AIMSweb Mathematics Computation, and for behavioral risk using the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS). Latent class analyses supported the fit of a three-class model, with resulting student classes defined as low-risk academic and behavior (Class 1), at-risk academic and high-risk behavior (Class 2), and at-risk math and behavior (Class 3). Logistic regression analyses indicated the classes demonstrated statistically significant differences in statewide achievement scores, as well as in suspensions. Further analysis indicated that the odds of all considered negative outcomes were higher for both groups characterized by risk (i.e., Classes 2 and 3). Negative outcomes were particularly likely for Class 2, with negative behavioral and academic outcomes being 6-15 and 112-169 times more likely, respectively. Results were taken to support an integrated approach to universal screening in schools, defined by the evaluation of both academic and behavioral risk. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Academic Performance , Behavior Rating Scale , Child Behavior , Problem Behavior , Child , Female , Humans , Male , Mass Screening , Midwestern United States , Risk , Schools
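The odds reported in the entry above come from logistic regression models predicting dichotomous outcomes from latent class membership. A minimal sketch of that final step is shown below using simulated class assignments and a simulated office-discipline-referral outcome (statsmodels is assumed to be available); exponentiating the class coefficient yields the odds ratio.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated stand-in data: latent class membership (1 = low risk, 2 = academic + behavior risk)
# and a dichotomous outcome (e.g., received an office discipline referral).
n = 1500
latent_class = rng.choice([1, 2], size=n, p=[0.85, 0.15])
p_referral = np.where(latent_class == 2, 0.40, 0.05)
referral = rng.binomial(1, p_referral)

df = pd.DataFrame({"at_risk_class": (latent_class == 2).astype(int), "referral": referral})

# Logistic regression of the outcome on class membership; the exponentiated
# coefficient is the odds ratio, the metric reported in the abstract above.
model = smf.logit("referral ~ at_risk_class", data=df).fit(disp=False)
odds_ratio = np.exp(model.params["at_risk_class"])
ci_low, ci_high = np.exp(model.conf_int().loc["at_risk_class"])
print(f"odds ratio = {odds_ratio:.1f} (95% CI {ci_low:.1f}-{ci_high:.1f})")
```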
6.
Sch Psychol ; 34(5): 503-511, 2019 Sep.
Article in English | MEDLINE | ID: mdl-30589314

ABSTRACT

Universal screening is useful in the early identification of behavioral and emotional concerns, but teacher-related variance can potentially influence screening scores and resulting decisions. The current study examined the extent to which burnout and self-efficacy as teacher-level variables accounted for variance in universal screening scores. The study participants included 1,314 K-6 students and 56 elementary school teachers. Teachers completed the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS) for each student in their classroom, while also completing rating scales regarding their personal self-efficacy and levels of burnout. Hierarchical linear modeling was employed to estimate the extent of teacher-related variance and whether burnout and self-efficacy accounted for this variance. Unconditional models indicated 12-30% of variance in screening scores was between teachers. Conditional models indicated teacher self-efficacy and the depersonalization component of teacher burnout were statistically significant predictors of Emotional Behavior and Total Behavior scores on the SAEBRS. Results further suggested that when combined, burnout and self-efficacy variables accounted for 7-30% of between-teacher variance in screening scores. Implications for practice and future research are discussed. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Behavior Rating Scale/statistics & numerical data , Burnout, Professional/epidemiology , Child Behavior , School Teachers/statistics & numerical data , Self Efficacy , Adult , Child , Child, Preschool , Female , Humans , Male , Models, Statistical
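The "12-30% of variance between teachers" figure in the entry above is the kind of quantity an unconditional (intercept-only) hierarchical linear model produces. The sketch below fits such a null model with a random intercept for teacher on simulated screening scores and computes the intraclass correlation; the variance components and sample sizes are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulated screening scores for students nested within teachers, with a
# between-teacher variance component (values illustrative only).
n_teachers, students_per_teacher = 50, 25
teacher_id = np.repeat(np.arange(n_teachers), students_per_teacher)
teacher_effect = rng.normal(0, 2.0, n_teachers)[teacher_id]        # between-teacher SD = 2
score = 40 + teacher_effect + rng.normal(0, 5.0, teacher_id.size)  # within-teacher SD = 5

df = pd.DataFrame({"teacher": teacher_id, "saebrs_total": score})

# Unconditional (intercept-only) model with a random intercept for teacher,
# analogous to the null HLM used to estimate between-teacher variance.
m = smf.mixedlm("saebrs_total ~ 1", data=df, groups=df["teacher"]).fit()
between = m.cov_re.iloc[0, 0]   # random-intercept (between-teacher) variance
within = m.scale                # residual (within-teacher) variance
icc = between / (between + within)
print(f"ICC (share of variance between teachers) = {icc:.2f}")
```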
7.
Sch Psychol Q ; 34(1): 86-95, 2019 Jan.
Article in English | MEDLINE | ID: mdl-29911877

ABSTRACT

Research has supported the applied use of Direct Behavior Rating Single-Item Scale (DBR-SIS) targets of "academic engagement" and "disruptive behavior" for a range of purposes, including universal screening and progress monitoring. Though useful in evaluating social behavior and externalizing problems, these targets have limited utility in evaluating emotional behavior and internalizing problems. Thus, the primary purpose of this study was to support the initial development and validation of a novel DBR-SIS target of "unhappy," which was intended to tap into the specific construct of depression. A particular focus of this study was on the novel target's utility within universal screening. A secondary purpose was to further validate the aforementioned existing DBR-SIS targets. Within this study, 87 teachers rated 1,227 students across two measures (i.e., DBR-SIS and the Teacher Observation of Classroom Adaptation-Checklist [TOCA-C]) and time points (i.e., fall and spring). Correlational analyses supported the test-retest reliability of each DBR-SIS target, as well as its convergent and discriminant validity across concurrent and predictive comparisons. Receiver operating characteristic (ROC) curve analyses further supported (a) the overall diagnostic accuracy of each target (as indicated by the area under the curve [AUC] statistic), as well as (b) the selection of cut scores found to accurately differentiate at-risk and not at-risk students (as indicated by conditional probability statistics). A broader review of findings suggested that across the majority of analyses, the existing DBR-SIS targets outperformed the novel "unhappy" target. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Child Behavior/psychology , Depression/diagnosis , Depressive Disorder/diagnosis , Adolescent , Behavior Rating Scale , Child , Depression/psychology , Depressive Disorder/psychology , Emotions , Female , Humans , Male , Mass Screening , Reproducibility of Results , Students/psychology
8.
J Sch Psychol ; 68: 129-141, 2018 06.
Article in English | MEDLINE | ID: mdl-29861023

ABSTRACT

In accordance with an argument-based approach to validation, the purpose of the current study was to yield evidence relating to Social, Academic, and Emotional Behavior Risk Screener (SAEBRS) score interpretation. Bifactor item response theory analyses were performed to examine SAEBRS item functioning. Structural equation modeling (SEM) was used to simultaneously evaluate intra- and inter-scale relationships, expressed through (a) a measurement model specifying a bifactor structure to SAEBRS items, and (b) a structural model specifying convergent and discriminant relations with an outcome measure (i.e., Behavioral and Emotional Screening System [BESS]). Finally, hierarchical omega coefficients were calculated to evaluate the model-based internal reliability of each SAEBRS scale. IRT analyses supported the adequate fit of the bifactor model, indicating items adequately discriminated between moderate- and high-risk students. SEM results further supported the fit of the latent bifactor measurement model, yielding superior fit relative to alternative models (i.e., unidimensional and correlated factors). SEM analyses also indicated the latent SAEBRS-Total Behavior factor was a statistically significant predictor of all BESS subscales, the SAEBRS-Academic Behavior factor predicted the BESS Adaptive Skills subscales, and the SAEBRS-Emotional Behavior factor predicted the BESS Internalizing Problems subscale. Hierarchical omega coefficients indicated the SAEBRS-Total Behavior factor was associated with adequate reliability. In contrast, after accounting for the total scale, each of the SAEBRS subscales was associated with somewhat limited reliability, suggesting that variability in these scores is largely driven by the Total Behavior scale. Implications for practice and future research are discussed.


Subject(s)
Child Behavior Disorders/diagnosis , Emotions/physiology , Problem Behavior/psychology , Students/psychology , Child , Child Behavior Disorders/psychology , Female , Humans , Male , Mass Screening , Psychometrics , Reproducibility of Results , Risk Assessment , Schools
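Hierarchical omega, referenced in the entry above, expresses how much composite-score variance is attributable to the general factor (for the total score) or to a group factor over and above the general factor (for a subscale). The sketch below computes omega-hierarchical from a small set of hypothetical standardized bifactor loadings, not the SAEBRS estimates; with loadings like these, the total score looks adequately reliable while the subscales carry little unique variance, mirroring the pattern described above.

```python
import numpy as np

# Hypothetical standardized bifactor loadings for a 6-item sketch: every item loads on a
# general factor, and each item also loads on one of two group factors.
general = np.array([0.70, 0.65, 0.60, 0.55, 0.60, 0.50])
group_a = np.array([0.40, 0.35, 0.30, 0.00, 0.00, 0.00])   # e.g., items for one subscale
group_b = np.array([0.00, 0.00, 0.00, 0.45, 0.40, 0.35])   # e.g., items for another subscale
residual = 1 - general**2 - group_a**2 - group_b**2        # standardized item uniquenesses

def omega_hierarchical_total(gen, groups, resid):
    """Proportion of total-score variance attributable to the general factor alone."""
    total_var = gen.sum() ** 2 + sum(g.sum() ** 2 for g in groups) + resid.sum()
    return gen.sum() ** 2 / total_var

def omega_hierarchical_subscale(gen, grp, resid, items):
    """Proportion of a subscale score's variance attributable to its group factor,
    over and above the general factor."""
    g, s, r = gen[items], grp[items], resid[items]
    return s.sum() ** 2 / (g.sum() ** 2 + s.sum() ** 2 + r.sum())

print("omega-H (total score):", round(omega_hierarchical_total(general, [group_a, group_b], residual), 2))
print("omega-HS (subscale A):", round(omega_hierarchical_subscale(general, group_a, residual, slice(0, 3)), 2))
print("omega-HS (subscale B):", round(omega_hierarchical_subscale(general, group_b, residual, slice(3, 6)), 2))
```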
9.
Sch Psychol Q ; 33(4): 582-589, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29792498

ABSTRACT

The purpose of this diagnostic accuracy study was to evaluate the sensitivity and specificity (among other indicators) of three universal screening approaches, including the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS), a SAEBRS-based teacher nomination tool, and a multiple gating procedure (MGP). Each screening approach was compared to the BASC-2 Behavioral and Emotional Screening System (BESS), which served as a criterion indicator of student social-emotional and behavioral risk. All data were collected in a concurrent fashion. Participants included 704 students (47.7% female) from four elementary schools within the Midwestern United States (21.6% were at risk per the BESS). Findings yielded support for the SAEBRS, with sensitivity = .93 (95% confidence interval [.89-.97]), specificity = .91 (.89-.93), and correct classification = .92. Findings further supported the MGP, which yielded sensitivity = .81 (.74-.87), specificity = .93 (.91-.95), and correct classification = .91. In contrast, the teacher nomination tool yielded questionable levels of diagnostic accuracy (sensitivity = .86 [.80-.91], specificity = .74 [.70-.78], and correct classification = .76). Overall, findings were particularly supportive of SAEBRS diagnostic accuracy, while also suggesting the MGP might serve as an acceptable approach to universal screening. Other implications for practice and directions for future research are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).


Subject(s)
Child Behavior Disorders/diagnosis , Child Behavior/psychology , Mass Screening/methods , Child , Child Behavior Disorders/psychology , Female , Humans , Male , Psychometrics , Risk Assessment , School Health Services , Schools , Sensitivity and Specificity , Students/psychology
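Sensitivity, specificity, and correct classification, with confidence intervals like those quoted in the entry above, can be computed directly from a 2x2 table of screener decisions against the criterion. The sketch below uses hypothetical cell counts loosely calibrated to the reported SAEBRS values and Wilson score intervals; the study may have used a different interval method.

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a proportion (one common way to interval-estimate
    sensitivity and specificity)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def diagnostic_accuracy(tp, fn, tn, fp):
    """Print sensitivity, specificity, and correct classification from a 2x2 table."""
    n_pos, n_neg, n = tp + fn, tn + fp, tp + fn + tn + fp
    stats = {
        "sensitivity": (tp, n_pos),
        "specificity": (tn, n_neg),
        "correct classification": (tp + tn, n),
    }
    for name, (hits, total) in stats.items():
        lo, hi = wilson_ci(hits, total)
        print(f"{name}: {hits / total:.2f} (95% CI {lo:.2f}-{hi:.2f})")

# Hypothetical counts: screener decision vs. criterion (BESS-style) risk status.
diagnostic_accuracy(tp=141, fn=11, tn=502, fp=50)
```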
10.
Sch Psychol Q ; 33(1): 155-159, 2018 03.
Article in English | MEDLINE | ID: mdl-29629792

ABSTRACT

The purpose of this study was to support the identification of Social, Academic, and Emotional Behavior Risk Screener (SAEBRS) cut scores that could be used to detect high-risk students. Teachers rated students across two time points (Time 1 n = 1,242 students; Time 2 n = 704) using the SAEBRS and the Behavioral and Emotional Screening System (BESS), the latter of which served as the criterion measure. Exploratory receiver operating characteristic (ROC) curve analyses of Time 1 data detected cut scores evidencing optimal levels of specificity and borderline-to-optimal levels of sensitivity. Cross-validation analyses of Time 2 data confirmed the performance of these cut scores, with all but one scale evidencing similar performance. Findings are considered particularly promising for the SAEBRS Total Behavior scale in detecting high-risk students.


Subject(s)
Behavior Rating Scale , Child Behavior Disorders/diagnosis , School Teachers , Schools , Child , Female , Humans , Male , Risk
11.
J Sch Psychol ; 66: 4-10, 2018 02.
Article in English | MEDLINE | ID: mdl-29429494

ABSTRACT

School psychology research and practice have considerable room for growth to move beyond "did an intervention work?" to "what intervention worked for whom, and how did it work?" The latter question reflects a more precise understanding of intervention and involves strategic efforts to ensure that the services received by students with academic, behavioral, emotional, or physical health problems are appropriately tailored to and produce benefit for individual students. The purpose of this special issue is to advance the notion and science of precision education, which is defined as an approach to research and practice that is concerned with tailoring preventive and intervention practices to individuals based on the best available evidence. This introductory article provides context for the special issue by discussing reasons why precision education is needed, providing definitions and descriptions of precision education research, and outlining opportunities to advance the science of precision education. Six empirical studies and one methodologically oriented article were compiled to provide examples of the breadth of research that falls under precision education. Although each of the articles focuses on students with different needs (literacy deficits, math deficits, emotional and behavior problems, and intellectual disability), a common thread binds them together: each one captures the heterogeneity among students with particular problems or deficits and highlights the need to select and deliver more precise interventions to optimize student outcomes.


Subject(s)
Education/standards , Students/psychology , Education/methods , Humans , Psychology, Educational
12.
Behav Modif ; 42(1): 84-107, 2018 01.
Article in English | MEDLINE | ID: mdl-29199448

ABSTRACT

Many populations served by special education, including students identified with autism or emotional impairments and students identified as not yet ready to learn, experience social competence deficits. The methods, content, and materials of the Social Competence Intervention-Adolescents (SCI-A) were designed to be maximally pertinent and applicable to the social competence needs of early adolescents (i.e., ages 11-14 years) identified as having scholastic potential but experiencing significant social competence deficits. Given the importance of establishing intervention efficacy, the current paper highlights results from a four-year cluster randomized trial (CRT) examining the efficacy of SCI-A (n = 146 students) relative to business-as-usual (BAU; n = 123 students) school-based programming. Educational personnel delivered all programming in both the intervention and BAU conditions. Student functioning was assessed across multiple time points, including pre-, mid-, and post-intervention. Outcomes of interest included social competence behaviors, which were assessed via both systematic direct observation and teacher behavior rating scales. Data were analyzed using multilevel models, with students nested within schools. Results suggested that, after controlling for baseline behavior and student IQ, BAU and SCI-A students differed to a statistically significant degree across multiple indicators of social performance. Further consideration of standardized mean difference effect sizes revealed these between-group differences to be representative of medium effects (d > .50). Such outcomes pertained to students' (a) awareness of social cues and information and (b) capacity to appropriately interact with teachers and peers. The need for additional power and the investigation of potential moderators and mediators of the intervention's effectiveness are explored.


Subject(s)
Affective Symptoms/therapy , Aptitude/physiology , Autism Spectrum Disorder/therapy , Child Behavior Disorders/therapy , Outcome Assessment, Health Care , Psychotherapy/methods , Schools , Social Skills , Adolescent , Affective Symptoms/physiopathology , Autism Spectrum Disorder/physiopathology , Child , Child Behavior Disorders/physiopathology , Female , Humans , Male
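The "medium effects (d > .50)" in the entry above refer to standardized mean difference effect sizes. The sketch below computes Cohen's d with a pooled standard deviation on simulated post-intervention scores for the two conditions; it omits the baseline and IQ covariates and the multilevel structure used in the actual analysis, and the score distributions are made up (only the group sizes mirror the trial).

```python
import numpy as np

rng = np.random.default_rng(3)

def cohens_d(treatment, control):
    """Standardized mean difference with a pooled standard deviation (Cohen's d)."""
    nt, nc = len(treatment), len(control)
    pooled_sd = np.sqrt(((nt - 1) * np.var(treatment, ddof=1) + (nc - 1) * np.var(control, ddof=1))
                        / (nt + nc - 2))
    return (np.mean(treatment) - np.mean(control)) / pooled_sd

# Simulated post-intervention social-performance ratings (arbitrary scale).
sci_a = rng.normal(loc=3.6, scale=1.0, size=146)
bau = rng.normal(loc=3.0, scale=1.0, size=123)
print(f"d = {cohens_d(sci_a, bau):.2f}")   # expected to land near the 'medium' range (d > .50)
```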
13.
Sch Psychol Q ; 33(1): 83-93, 2018 03.
Article in English | MEDLINE | ID: mdl-28604023

ABSTRACT

The purpose of this study was to evaluate the concurrent validity, sensitivity to change, and teacher acceptability of Direct Behavior Rating single-item scales (DBR-SIS), a brief progress monitoring measure designed to assess student behavioral change in response to intervention. Twenty-four elementary teacher-student dyads implemented a daily report card intervention to promote positive student behavior during prespecified classroom activities. During both baseline and intervention, teachers completed DBR-SIS ratings of 2 target behaviors (i.e., Academic Engagement, Disruptive Behavior), whereas research assistants collected systematic direct observation (SDO) data in relation to the same behaviors. Five change metrics (i.e., absolute change, percent of change from baseline, improvement rate difference, Tau-U, and standardized mean difference; Gresham, 2005) were calculated for both DBR-SIS and SDO data, yielding estimates of the change in student behavior in response to intervention. Mean DBR-SIS scores were predominantly moderately to highly correlated with SDO data within both baseline and intervention, demonstrating evidence of the former's concurrent validity. DBR-SIS change metrics were also significantly correlated with SDO change metrics for both Disruptive Behavior and Academic Engagement, yielding evidence of the former's sensitivity to change. In addition, teachers' ratings on the Usage Rating Profile-Assessment (URP-A) indicated that they found DBR-SIS to be acceptable and usable. Implications for practice, study limitations, and areas of future research are discussed.


Subject(s)
Behavior Rating Scale/standards , Child Behavior Disorders/therapy , Child Behavior , Outcome Assessment, Health Care/standards , School Teachers , Adolescent , Adult , Child , Female , Humans , Male , Reproducibility of Results , Sensitivity and Specificity
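A few of the change metrics named in the entry above can be computed directly from raw baseline and intervention ratings. The sketch below works through absolute change, percent change from baseline, and a basic Tau nonoverlap index on hypothetical DBR-SIS Academic Engagement data; full Tau-U additionally corrects for baseline trend, and improvement rate difference and the standardized mean difference are not shown.

```python
import numpy as np

# Hypothetical DBR-SIS Academic Engagement ratings (0-10 scale) across a
# baseline (A) phase and an intervention (B) phase for one student.
baseline = np.array([4, 5, 3, 4, 5], dtype=float)
intervention = np.array([6, 7, 6, 8, 7, 8], dtype=float)

# Absolute change and percent change from baseline, two of the simpler metrics.
absolute_change = intervention.mean() - baseline.mean()
percent_change = 100 * absolute_change / baseline.mean()

# Tau for A-vs-B nonoverlap: compare every baseline/intervention pair.
# (Full Tau-U also adjusts for baseline trend, which this sketch omits.)
pairs = intervention[None, :] - baseline[:, None]
tau = (np.sum(pairs > 0) - np.sum(pairs < 0)) / pairs.size

print(f"absolute change = {absolute_change:.2f}")
print(f"percent change from baseline = {percent_change:.1f}%")
print(f"Tau (A vs B nonoverlap) = {tau:.2f}")
```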
14.
J Sch Psychol ; 60: 65-82, 2017 02.
Article in English | MEDLINE | ID: mdl-28164800

ABSTRACT

Evidence-based interventions (EBIs) have become a central component of school psychology research and practice, but EBIs are dependent upon the availability and use of evidence-based assessments (EBAs) with diverse student populations. Multi-group confirmatory factor analysis (MG-CFA) is an analytical tool that can be used to examine the validity and measurement equivalence/invariance of scores across diverse groups. The objective of this article is to provide a conceptual and procedural overview of categorical MG-CFA, as well as an illustrated example based on data from the Social and Academic Behavior Risk Screener (SABRS), a tool designed for use in school-based interventions. This article serves as a non-technical primer on the topic of MG-CFA with ordinal (rating scale) data and does so through the framework of examining equivalence of measures used for EBIs within multi-tiered models, an understudied topic. To go along with the illustrated example, we have provided supplementary files that include sample data, Mplus input code, and an annotated guide for understanding the input code (http://dx.doi.org/10.1016/j.jsp.2016.11.002). Data needed to reproduce the analyses are available as supplemental materials (online only) in the Appendix of this article.


Subject(s)
Adolescent Behavior , Child Behavior , Factor Analysis, Statistical , Psychology, Educational/methods , Psychometrics/methods , Social Behavior , Adolescent , Child , Humans
15.
Sch Psychol Q ; 32(2): 240-253, 2017 06.
Article in English | MEDLINE | ID: mdl-27243239

ABSTRACT

The purpose of this investigation was to evaluate the utility of Direct Behavior Rating Single Item Scale (DBR-SIS) methodology in collecting functional behavior assessment data. Specific questions of interest pertained to the evaluation of the accuracy of brief DBR-SIS ratings of behavioral consequences and determination of the type of training necessary to support such accuracy. Undergraduate student participants (N = 213; 62.0% male; 62.4% White) viewed video clips of students in a classroom setting, and then rated both disruptive behavior and 4 consequences of that behavior (i.e., adult attention, peer attention, escape/avoidance, and access to tangibles/activities). Results indicated training with performance feedback was necessary to support the generation of accurate disruptive behavior and consequence ratings. Participants receiving such support outperformed students in training-only, pretest-posttest, and posttest-only groups for disruptive behavior and all 4 DBR-SIS consequence targets. Future directions for research and implications for practice are discussed, including how teacher ratings may be collected along with other forms of assessment (e.g., progress monitoring) within an efficient Tier 2 assessment model.


Subject(s)
Behavior Rating Scale , Child Behavior/psychology , Peer Group , Problem Behavior/psychology , Students/psychology , Attention/physiology , Child , Female , Humans , Male , Schools
16.
J Sch Psychol ; 58: 21-39, 2016 10.
Article in English | MEDLINE | ID: mdl-27586068

ABSTRACT

The primary purposes of this investigation were to (a) continue a line of research examining the psychometric defensibility of the Social, Academic, and Emotional Behavior Risk Screener - Teacher Rating Scale (SAEBRS-TRS), and (b) develop and preliminarily evaluate the diagnostic accuracy of a novel multiple gating procedure based on teacher nomination and the SAEBRS-TRS. Two studies were conducted with elementary and middle school student samples across two separate geographic locations. Study 1 (n = 864 students) results supported SAEBRS-TRS defensibility, revealing acceptable to optimal levels of internal consistency reliability, concurrent validity, and diagnostic accuracy. Findings were promising for a combined multiple gating procedure, which demonstrated acceptable levels of sensitivity and specificity. Study 2 (n = 1,534 students), which replicated Study 1 procedures, further supported the psychometric defensibility of the SAEBRS-TRS in terms of reliability, validity, and diagnostic accuracy. Despite the incorporation of revisions intended to improve sensitivity, the combined multiple gating procedure's diagnostic accuracy was similar to that found in Study 1. Taken together, results build upon prior research in support of the applied use of the SAEBRS-TRS, as well as justify future research regarding a SAEBRS-based multiple gating procedure. Implications for practice and study limitations are discussed.


Subject(s)
Adolescent Behavior , Child Behavior Disorders/diagnosis , Child Behavior , Psychiatric Status Rating Scales/standards , Psychometrics/instrumentation , Adolescent , Child , Female , Humans , Male , Reproducibility of Results , Risk
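The multiple gating procedure in the entry above combines an inexpensive first gate (teacher nomination) with a rating-scale second gate applied only to nominated students. The sketch below illustrates that serial decision logic; the cut score, the assumption that lower SAEBRS-TRS Total Behavior scores indicate greater risk, and the student records are all hypothetical, and the study's combined procedure may have merged the gates differently.

```python
from dataclasses import dataclass

@dataclass
class Student:
    name: str
    nominated: bool       # Gate 1: teacher nomination
    saebrs_total: int     # Gate 2: rating-scale total score (assumed lower = more risk)

# Hypothetical cut score; applied cut scores should come from the published validation studies.
CUT_SCORE = 36

def multiple_gate_flag(student: Student) -> bool:
    """A student is flagged only after passing through both gates:
    nominated by the teacher AND at or below the rating-scale cut score."""
    if not student.nominated:                    # Gate 1 screens out most students cheaply
        return False
    return student.saebrs_total <= CUT_SCORE     # Gate 2 confirms with the full rating scale

roster = [
    Student("A", nominated=True, saebrs_total=30),   # nominated and at/below the cut: flagged
    Student("B", nominated=True, saebrs_total=45),   # nominated but scores above the cut
    Student("C", nominated=False, saebrs_total=28),  # low score but never reaches Gate 2
]
for s in roster:
    print(s.name, "flagged as at risk" if multiple_gate_flag(s) else "not flagged")
```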
17.
Sch Psychol Q ; 31(3): 431-442, 2016 09.
Article in English | MEDLINE | ID: mdl-26524424

ABSTRACT

The purpose of this investigation was to evaluate the reliability of Direct Behavior Ratings-Social Competence (DBR-SC) ratings. Participants included 60 students identified as possessing deficits in social competence, as well as their 23 classroom teachers. Teachers used DBR-SC to complete ratings of 5 student behaviors within the general education setting on a daily basis across approximately 5 months. During this time, each student was assigned to 1 of 2 intervention conditions, including the Social Competence Intervention-Adolescent (SCI-A) and a business-as-usual (BAU) intervention. Ratings were collected across 3 intervention phases, including pre-, mid-, and postintervention. Results suggested DBR-SC ratings were highly consistent across time within each student, with reliability coefficients predominantly falling in the .80 and .90 ranges. Findings further indicated such levels of reliability could be achieved with only a small number of ratings, with estimates varying between 2 and 10 data points. Group comparison analyses further suggested the reliability of DBR-SC ratings increased over time, such that student behavior became more consistent throughout the intervention period. Furthermore, analyses revealed that for 2 of the 5 DBR-SC behavior targets, the increase in reliability over time was moderated by intervention grouping, with students receiving SCI-A demonstrating greater increases in reliability relative to those in the BAU group. Limitations of the investigation as well as directions for future research are discussed herein.


Subject(s)
Behavior Rating Scale/standards , Social Behavior Disorders/diagnosis , Social Skills , Child , Female , Humans , Male , Reproducibility of Results , Social Behavior Disorders/psychology , Students
18.
Psychol Assess ; 28(10): 1265-1275, 2016 10.
Article in English | MEDLINE | ID: mdl-26619092

ABSTRACT

Universal screening for mental health has gained prominence in schools with the adoption of multitiered systems of support. However, there is a general lack of brief, psychometrically defensible instruments that assess emotional and behavioral risk. This study employed a multilevel, confirmatory bifactor analysis to evaluate the factor structure of a novel screening instrument, the Social, Academic, and Emotional Behavioral Risk Screener (SAEBRS; Kilgus & von der Embse, 2014), examining the structure at the student (within) and teacher or rater (between) levels. Item response theory (IRT) analyses were then used to examine the functioning of 2 existing factors, social risk and academic risk, in addition to a newly introduced third factor, emotional risk, within a sample of 834 elementary and middle school students. Results indicated good fit of a bifactor model including the addition of the new Emotional Behavior subscale. IRT analyses suggested strong item-level discriminative properties (a > 1.0) for 17 of the 19 SAEBRS items and indicated that scale precision was greatest within the low to moderate range of each respective dimension (social, academic, and behavioral risk). Overall, the findings provide support for the use of the SAEBRS as a screener for mental health-related concerns. Implications for model interpretation and model use are discussed.


Subject(s)
Mass Screening/methods , Mental Disorders/diagnosis , Psychiatric Status Rating Scales , School Health Services , Adolescent , Child , Factor Analysis, Statistical , Female , Humans , Male , Psychometrics , Reproducibility of Results , Risk Assessment , Schools , Students/psychology
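The discrimination values (a > 1.0) and the observation that precision peaks in the low-to-moderate range of each dimension, both noted in the entry above, can be made concrete with the IRT item information function. The sketch below evaluates item and test information for hypothetical discrimination and location parameters (not the SAEBRS estimates) using a dichotomous 2PL model for simplicity, whereas the study's rating-scale items would typically call for a polytomous model such as the graded response model.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL item response function: probability of endorsing the item at trait level theta."""
    return 1 / (1 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information for a 2PL item: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1 - p)

# Hypothetical discrimination (a) and location (b) parameters for a handful of items;
# locations below zero concentrate information in the low-to-moderate trait range,
# mirroring the pattern of scale precision described above.
a_params = np.array([1.4, 1.8, 1.2, 2.0, 1.6])
b_params = np.array([-1.5, -1.0, -0.5, -1.2, 0.0])

theta_grid = np.linspace(-3, 3, 7)
test_info = sum(item_information(theta_grid, a, b) for a, b in zip(a_params, b_params))
for theta, info in zip(theta_grid, test_info):
    print(f"theta = {theta:+.0f}  test information = {info:.2f}")
```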
19.
Sch Psychol Q ; 30(2): 159-165, 2015 Jun.
Article in English | MEDLINE | ID: mdl-26009938

ABSTRACT

This special topic section features research regarding practices that will support mental health service delivery within a school-based multitiered framework. The articles include data and discussions regarding the evaluation of universal, targeted, or intensive intervention addressing mental health concerns and assessment tools intended for use in screening, progress monitoring, or problem identification. The featured interventions and assessment practices are suitable for use within a service delivery model that prioritizes ecological theory, data-based decision making, and problem-solving logic. Each article includes a conceptualization of how the intervention/assessment of interest fits into a school-based multitiered framework and information about the feasibility and utility of the practice in school-based settings. These articles highlight the use of mental health intervention and assessment within a multitiered problem-solving framework, and will hopefully stimulate interest in and further scholarship on this important topic.


Subject(s)
Mental Disorders/therapy , Mental Health Services/organization & administration , Clinical Decision-Making , Delivery of Health Care/organization & administration , Early Diagnosis , Health Policy , Humans , Internal-External Control , Mental Disorders/diagnosis , Mental Health , Risk Factors
20.
Sch Psychol Q ; 30(3): 335-352, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25264747

ABSTRACT

The purpose of this investigation was to evaluate the models for interpretation and use that serve as the foundation of an interpretation/use argument for the Social and Academic Behavior Risk Screener (SABRS). The SABRS was completed by 34 teachers with regard to 488 students in a Midwestern high school during the winter portion of the academic year. Confirmatory factor analysis supported interpretation of SABRS data, suggesting the fit of a bifactor model specifying 1 broad factor (General Behavior) and 2 narrow factors (Social Behavior [SB] and Academic Behavior [AB]). The interpretive model was further supported by analyses indicative of the internal consistency and interrater reliability of scores from each factor. In addition, latent profile analyses indicated the adequate fit of the proposed 4-profile SABRS model for use. When cross-referenced with SABRS cut scores identified via previous work, results revealed students could be categorized as (a) not at-risk on both SB and AB, (b) at-risk on SB but not on AB, (c) at-risk on AB but not on SB, or (d) at-risk on both SB and AB. Taken together, results contribute to growing evidence supporting the SABRS within universal screening. Limitations, implications for practice, and future directions for research are discussed herein.


Subject(s)
Behavior Rating Scale/standards , Child Behavior Disorders/diagnosis , Social Behavior Disorders/diagnosis , Child , Early Diagnosis , Humans , Models, Psychological , Psychiatric Status Rating Scales , Risk Assessment/methods