ABSTRACT
BACKGROUND: The purpose of the study was to examine the impact of a computerized cognitive behavior therapy (CBT) self-help treatment for obsessive-compulsive disorder (OCD), BT Steps, both alone and when supported by coaching from either a lay (non-therapist) coach or an experienced CBT therapist.
METHODS: Eighty-seven subjects with clinically significant OCD were recruited through newspaper ads and randomly assigned to receive 12 weeks of treatment with BT Steps alone (n = 28), BT Steps with non-therapist coaching (n = 28), or BT Steps with CBT therapist coaching (n = 31). Subjects worked through BT Steps at their own pace. Subjects receiving BT Steps alone received a welcome call from the project manager. Subjects randomized to either coaching arm received regularly scheduled weekly phone calls for coaching, encouragement, and support. No formal therapy was provided by the coaches; thus, both lay and therapist coaches completed the same tasks.
RESULTS: All three treatment arms showed a significant reduction in Yale-Brown Obsessive Compulsive Scale (YBOCS) scores, with mean (SD) changes of 6.5 (5.7), 7.1 (6.1), and 6.5 (6.1) for the no-coaching, lay-coaching, and therapist-coaching arms, respectively (all ps < .001). These represent effect sizes of 1.16, 1.41, and 1.12, respectively. No significant differences were found between treatment arms on YBOCS change scores (F(2) = 0.10, p = .904) or on the number of exposure sessions completed (F(2) = 0.033, p = .967). When asked which method of therapy (computer vs. clinician) they preferred, 48% of subjects said computer, 33% said face-to-face therapy, and 19% had no preference.
CONCLUSIONS: Results support the use of online self-help for the treatment of moderate OCD. The addition of coaching by either a lay coach or a CBT therapist did not significantly improve outcomes.
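For readers less familiar with the statistics, within-group effect sizes of this kind are conventionally computed as Cohen's d, the mean change divided by a standard deviation. Below is a minimal Python sketch; because the abstract does not state which SD served as the denominator (the SD of the change scores is used here, but a baseline or pooled SD would yield different values), the printed figures are illustrative and will not exactly reproduce the reported 1.16, 1.41, and 1.12.

    # Within-group Cohen's d for a change score: mean change / SD.
    # The choice of denominator is an assumption; the abstract does not specify it.
    arms = {
        "no coaching":        (6.5, 5.7),   # mean (SD) YBOCS change
        "lay coaching":       (7.1, 6.1),
        "therapist coaching": (6.5, 6.1),
    }
    for arm, (mean_change, sd_change) in arms.items():
        print(f"{arm}: d = {mean_change / sd_change:.2f}")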
ABSTRACT
BACKGROUND: Good interrater reliability is essential to minimize error variance and improve study power. Reasons why raters differ in scoring the same patient include information variance (different information is obtained because different questions are asked), observation variance (the same information is obtained, but raters differ in what they notice and remember), interpretation variance (differences in the significance attached to what is observed), criterion variance (different criteria are used to score items), and subject variance (true differences in the subject). We videotaped and transcribed 30 pairs of interviews to examine the most common sources of rater unreliability.
METHOD: Thirty patients with depression were independently interviewed by 2 different raters on the same day. Raters provided rationales for their scoring, and independent assessors reviewed the rationales, the interview transcripts, and the videotapes to code the main reason for each discrepancy. One third of the interviews were conducted by raters who had not administered the Hamilton Depression Rating Scale before; one third, by raters who were experienced but not calibrated; and one third, by raters who were experienced and calibrated.
RESULTS: Experienced and calibrated raters had the highest interrater reliability (intraclass correlation coefficient [ICC] = 0.93), followed by inexperienced raters (ICC = 0.77) and experienced but uncalibrated raters (ICC = 0.55). The most common reason for disagreement was interpretation variance (39%), followed by information variance (30%), criterion variance (27%), and observation variance (4%). Experienced and calibrated raters had significantly less criterion variance than the other cohorts (P = 0.001).
CONCLUSIONS: Reasons for disagreement varied by level of experience and calibration. Experienced but uncalibrated raters should focus on establishing common conventions, whereas experienced and calibrated raters should focus on fine-tuning judgment calls at different symptom thresholds. Calibration training seems to improve reliability beyond experience alone; notably, experienced raters without cohort calibration had lower reliability than inexperienced raters.
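For readers curious how an interrater ICC of the kind reported above can be computed, here is a minimal Python sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater), one common choice for two independent raters. The abstract does not specify which ICC form was used, and the ratings below are hypothetical.

    import numpy as np

    def icc_2_1(scores):
        # ICC(2,1): two-way random-effects ANOVA, absolute agreement,
        # single rater. scores: (n_subjects, k_raters) array of ratings.
        n, k = scores.shape
        grand = scores.mean()
        row_means = scores.mean(axis=1)   # per-subject means
        col_means = scores.mean(axis=0)   # per-rater means
        msr = k * ((row_means - grand) ** 2).sum() / (n - 1)  # subjects MS
        msc = n * ((col_means - grand) ** 2).sum() / (k - 1)  # raters MS
        sse = ((scores - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
        mse = sse / ((n - 1) * (k - 1))                        # residual MS
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Hypothetical HDRS totals from 2 raters scoring the same 5 patients
    ratings = np.array([[18, 20], [25, 24], [12, 15], [30, 28], [22, 22]])
    print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")  # -> 0.95 for these toy data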