Results 1-20 of 53
1.
Behav Res Methods ; 55(7): 3494-3503, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36223007

ABSTRACT

Currently, the design standards for single-case experimental designs (SCEDs) are based on validity considerations as prescribed by the What Works Clearinghouse. However, there is a need for design considerations such as power based on statistical analyses. We compute and derive power for (AB)^k designs with multiple cases, which are common in SCEDs. Our computations show that effect size has the greatest impact on power, followed by the number of subjects and then the number of phase reversals. An effect size of 0.75 or higher, at least one set of phase reversals (i.e., k > 1), and at least three subjects yielded high power. The latter two conditions agree with current standards requiring either at least an ABAB design or a multiple-baseline design with three subjects to meet design standards. An effect size of 0.75 or higher is not uncommon in SCEDs either. Autocorrelations, the number of time points per phase, and intraclass correlations had a smaller but non-negligible impact on power. In sum, the power analyses in the present study show that the conditions needed to meet power requirements are not unreasonable in SCEDs. The software code to compute power is available on GitHub for readers' use.
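The abstract does not reproduce the authors' analytic derivation, but the shape of such a power analysis can be illustrated with a small Monte Carlo sketch: simulate (AB)^k series for several cases with a given effect size, autocorrelation, and intraclass correlation, then count how often a simple case-level test rejects the null. All parameter names and the choice of test below are illustrative assumptions, not the paper's method; the authors' own code is the GitHub resource mentioned above.

```python
# Monte Carlo sketch of power for an (AB)^k single-case design with n cases.
# This is NOT the paper's analytic derivation; it is an illustrative simulation
# under assumed parameter names (delta, rho, icc, ...).
import numpy as np
from scipy import stats

def simulate_power(n_cases=3, k=2, points_per_phase=5, delta=0.75,
                   rho=0.2, icc=0.2, n_reps=2000, alpha=0.05, seed=1):
    """Approximate power of a case-level one-sample t-test on B-minus-A means."""
    rng = np.random.default_rng(seed)
    rejections = 0
    T = 2 * k * points_per_phase                      # time points per case
    phase_is_B = np.tile(np.repeat([0, 1], points_per_phase), k).astype(bool)
    for _ in range(n_reps):
        diffs = np.empty(n_cases)
        for i in range(n_cases):
            case_effect = rng.normal(0, np.sqrt(icc / (1 - icc)))  # between-case level shift
            e = np.empty(T)
            e[0] = rng.normal()
            for t in range(1, T):                     # stationary AR(1) errors, unit variance
                e[t] = rho * e[t - 1] + rng.normal(0, np.sqrt(1 - rho ** 2))
            y = case_effect + delta * phase_is_B + e
            diffs[i] = y[phase_is_B].mean() - y[~phase_is_B].mean()
        t_stat, p = stats.ttest_1samp(diffs, 0.0)
        rejections += (p < alpha)
    return rejections / n_reps

print(simulate_power())   # power for d = 0.75, 3 cases, k = 2
```

In this toy setup, raising delta, n_cases, or k tends to move power the most, which mirrors the ordering of influences reported in the abstract.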


Subjects
Research Design, Humans
2.
Neuropsychol Rehabil ; 27(1): 1-15, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27499422

ABSTRACT

We developed a reporting guideline to provide authors with guidance about what should be reported when writing a paper for publication in a scientific journal using a particular type of research design: the single-case experimental design. This report describes the methods used to develop the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016. As a result of 2 online surveys and a 2-day meeting of experts, the SCRIBE 2016 checklist was developed, which is a set of 26 items that authors need to address when writing about single-case research. This article complements the more detailed SCRIBE 2016 Explanation and Elaboration article (Tate et al., 2016) that provides a rationale for each of the items and examples of adequate reporting from the literature. Both these resources will assist authors to prepare reports of single-case research with clarity, completeness, accuracy, and transparency. They will also provide journal reviewers and editors with a practical checklist against which such reports may be critically evaluated. We recommend that the SCRIBE 2016 is used by authors preparing manuscripts describing single-case research for publication, as well as journal reviewers and editors who are evaluating such manuscripts. SCIENTIFIC ABSTRACT: Reporting guidelines, such as the Consolidated Standards of Reporting Trials (CONSORT) Statement, improve the reporting of research in the medical literature (Turner et al., 2012). Many such guidelines exist and the CONSORT Extension to Nonpharmacological Trials (Boutron et al., 2008) provides suitable guidance for reporting between-groups intervention studies in the behavioural sciences. The CONSORT Extension for N-of-1 Trials (CENT 2015) was developed for multiple crossover trials with single individuals in the medical sciences (Shamseer et al., 2015; Vohra et al., 2015), but there is no reporting guideline in the CONSORT tradition for single-case research used in the behavioural sciences. We developed the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016 to meet this need. This Statement article describes the methodology of the development of the SCRIBE 2016, along with the outcome of 2 Delphi surveys and a consensus meeting of experts. We present the resulting 26-item SCRIBE 2016 checklist. The article complements the more detailed SCRIBE 2016 Explanation and Elaboration article (Tate et al., 2016) that provides a rationale for each of the items and examples of adequate reporting from the literature. Both these resources will assist authors to prepare reports of single-case research with clarity, completeness, accuracy, and transparency. They will also provide journal reviewers and editors with a practical checklist against which such reports may be critically evaluated.


Subjects
Behavior Therapy, Checklist, Guidelines as Topic, Publishing, Research Design, Research Report/standards, Humans, Peer Review of Research/standards
3.
Am J Occup Ther ; 70(4): 7004320010p1-11, 2016.
Article in English | MEDLINE | ID: mdl-27294998

ABSTRACT

Reporting guidelines, such as the Consolidated Standards of Reporting Trials (CONSORT) Statement, improve the reporting of research in the medical literature (Turner et al., 2012). Many such guidelines exist, and the CONSORT Extension to Nonpharmacological Trials (Boutron et al., 2008) provides suitable guidance for reporting between-groups intervention studies in the behavioral sciences. The CONSORT Extension for N-of-1 Trials (CENT 2015) was developed for multiple crossover trials with single individuals in the medical sciences (Shamseer et al., 2015; Vohra et al., 2015), but there is no reporting guideline in the CONSORT tradition for single-case research used in the behavioral sciences. We developed the Single-Case Reporting guideline In Behavioral interventions (SCRIBE) 2016 to meet this need. This Statement article describes the methodology of the development of the SCRIBE 2016, along with the outcome of 2 Delphi surveys and a consensus meeting of experts. We present the resulting 26-item SCRIBE 2016 checklist. The article complements the more detailed SCRIBE 2016 Explanation and Elaboration article (Tate et al., 2016) that provides a rationale for each of the items and examples of adequate reporting from the literature. Both these resources will assist authors to prepare reports of single-case research with clarity, completeness, accuracy, and transparency. They will also provide journal reviewers and editors with a practical checklist against which such reports may be critically evaluated.


Subjects
Behavioral Sciences/methods, Checklist, Guidelines as Topic, Publishing/standards, Research Design, Research Report/standards, Delphi Technique, Humans
4.
Neuropsychol Rehabil ; 24(3-4): 528-53, 2014.
Article in English | MEDLINE | ID: mdl-23862576

ABSTRACT

We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic and possible remedies for them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case designs, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.
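As a rough illustration of what a between-case standardized mean difference looks like, the sketch below divides the average baseline-to-treatment change across cases by a combined within- and between-case standard deviation. It is a simplified stand-in, not the published d-statistic, which additionally corrects for autocorrelation and small-sample bias; the function and variable names are made up for the example.

```python
# Naive between-case standardized mean difference for an AB-type design:
# average (mean B - mean A) across cases, divided by an estimate of the total
# (within-case + between-case) SD. Omits the corrections of the published d.
import numpy as np

def naive_between_case_d(cases):
    """cases: list of (baseline, treatment) tuples of 1-D arrays, one per case."""
    diffs = np.array([np.mean(b) - np.mean(a) for a, b in cases])
    # within-case deviations pooled over phases and cases
    within = np.concatenate([np.concatenate([a - np.mean(a), b - np.mean(b)])
                             for a, b in cases])
    s_within = np.std(within, ddof=2 * len(cases))
    case_means = np.array([np.mean(np.concatenate([a, b])) for a, b in cases])
    s_between = np.std(case_means, ddof=1) if len(cases) > 1 else 0.0
    s_total = np.sqrt(s_within ** 2 + s_between ** 2)
    return diffs.mean() / s_total

# Example with made-up data for three cases
rng = np.random.default_rng(0)
cases = [(rng.normal(0, 1, 8), rng.normal(1, 1, 8)) for _ in range(3)]
print(naive_between_case_d(cases))
```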


Subjects
Research Design/statistics & numerical data, Humans, Meta-Analysis as Topic
5.
Behav Res Methods ; 45(3): 813-21, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23239070

ABSTRACT

Researchers in the single-case design tradition have debated the size and importance of the observed autocorrelations in those designs. All of the past estimates of the autocorrelation in that literature have taken the observed autocorrelation estimates as the data to be used in the debate. However, estimates of the autocorrelation are subject to great sampling error when the design has a small number of time points, as is typically the situation in single-case designs. Thus, a given observed autocorrelation may greatly over- or underestimate the corresponding population parameter. This article presents Bayesian estimates of the autocorrelation that greatly reduce the role of sampling error, as compared to past estimators. Simpler empirical Bayes estimates are presented first, in order to illustrate the fundamental notions of autocorrelation sampling error and shrinkage, followed by fully Bayesian estimates, and the difference between the two is explained. Scripts to do the analyses are available as supplemental materials. The analyses are illustrated using two examples from the single-case design literature. Bayesian estimation warrants wider use, not only in debates about the size of autocorrelations, but also in statistical methods that require an independent estimate of the autocorrelation to analyze the data.
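The core idea, shrinking noisy case-level autocorrelation estimates toward a pooled value, can be sketched with a simple empirical-Bayes-style calculation. This is not the authors' model or their supplemental scripts; the 1/T sampling-variance approximation and the moment estimate of between-case variance are assumptions made for illustration.

```python
# Empirical-Bayes-style shrinkage of lag-1 autocorrelations from short series,
# illustrating the shrinkage idea in the abstract (not the authors' exact model).
import numpy as np

def lag1_autocorr(y):
    y = np.asarray(y, dtype=float) - np.mean(y)
    return np.sum(y[1:] * y[:-1]) / np.sum(y ** 2)

def shrink_autocorrs(series_list):
    """Shrink each case's observed r1 toward the precision-weighted grand mean."""
    r = np.array([lag1_autocorr(y) for y in series_list])
    n = np.array([len(y) for y in series_list])
    v = 1.0 / n                                      # crude sampling variance of r1 (~1/T)
    grand = np.average(r, weights=1.0 / v)
    tau2 = max(np.var(r, ddof=1) - v.mean(), 0.0)    # moment estimate of between-case variance
    weight = tau2 / (tau2 + v)                       # shrinkage toward the grand mean
    return grand + weight * (r - grand)

rng = np.random.default_rng(0)
series = [rng.normal(size=10) for _ in range(6)]     # six short, independent series
print(shrink_autocorrs(series))
```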


Subjects
Bayes Theorem, Statistical Models, Statistical Data Interpretation, Humans, Regression Analysis, Research Design, Sample Size, Selection Bias
6.
Cogn Behav Ther ; 40(1): 15-33, 2011.
Article in English | MEDLINE | ID: mdl-21337212

ABSTRACT

It is essential that outcome research permit clear conclusions to be drawn about the efficacy of interventions. The common practice of nesting therapists within conditions can pose important methodological challenges that affect interpretation, particularly if the study is not powered to account for the nested design. An obstacle to the optimal design of these studies is the lack of data about the intraclass correlation coefficient (ICC), which measures the statistical dependencies introduced by nesting. To begin the development of a public database of ICC estimates, the authors investigated ICCs for a variety of outcomes reported in 20 psychotherapy outcome studies. The magnitude of the 495 ICC estimates varied widely across measures and studies. The authors provide recommendations regarding how to select and aggregate ICC estimates for power calculations and show how researchers can use ICC estimates to choose the number of patients and therapists that will optimize power. Attention to these recommendations will strengthen the validity of inferences drawn from psychotherapy studies that nest therapists within conditions.
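The practical consequence of a nonzero therapist ICC is usually expressed through the design effect 1 + (m - 1) * ICC, which deflates the effective number of patients. The helper below is a minimal sketch of that calculation, with illustrative numbers rather than values from the paper.

```python
# Design-effect sketch: a nonzero therapist ICC with caseloads of size m
# inflates variance by 1 + (m - 1) * ICC, shrinking the effective sample size.
# The numbers below are illustrative, not estimates from the paper.

def effective_n(n_patients, patients_per_therapist, icc):
    """Effective sample size after applying the design effect."""
    design_effect = 1.0 + (patients_per_therapist - 1) * icc
    return n_patients / design_effect

# 120 patients treated in caseloads of 10 with ICC = 0.05 carry roughly the
# information of 83 independent patients.
print(round(effective_n(120, 10, 0.05), 1))   # 82.8
```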


Subjects
Clinical Trials as Topic/methods, Psychotherapy/methods, Research Design, Humans, Outcome Assessment in Health Care, Reproducibility of Results, Treatment Outcome
7.
Behav Res Methods ; 43(4): 971-80, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21656107

ABSTRACT

This article reports the results of a study that located, digitized, and coded all 809 single-case designs appearing in 113 studies published in 2008 across 21 journals in a variety of fields in psychology and education. Coded variables included the specific kind of design, number of cases per study, number of outcomes, data points and phases per case, and autocorrelations for each case. Although studies of the effects of interventions are a minority in these journals, within that category, single-case designs are used more frequently than randomized or nonrandomized experiments. The modal study uses a multiple-baseline design with 20 data points for each of three or four cases, where the aim of the intervention is to increase the frequency of a desired behavior, although these characteristics vary widely across studies. The average autocorrelation is near to, but significantly different from, zero; however, autocorrelations are significantly heterogeneous across studies. The results have implications for the contributions of single-case designs to evidence-based practice and suggest a number of future research directions.


Subjects
Research Design, Humans
10.
Am J Public Health ; 98(8): 1418-24, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18556603

ABSTRACT

OBJECTIVES: We reviewed published individually randomized group treatment (IRGT) trials to assess researchers' awareness of within-group correlation and determine whether appropriate design and analytic methods were used to test for treatment effectiveness. METHODS: We assessed sample size and analytic methods in IRGT trials published in 6 public health and behavioral health journals between 2002 and 2006. RESULTS: Our review included 34 articles; in 32 (94.1%) of these articles, inappropriate analytic methods were used. In only 1 article did the researchers claim that expected intraclass correlations (ICCs) were taken into account in sample size estimation; in most articles, sample size was not mentioned or ICCs were ignored in the reported calculations. CONCLUSIONS: Trials in which individuals are randomly assigned to study conditions and treatments administered in groups may induce within-group correlation, violating the assumption of independence underlying commonly used statistical methods. Methods that take expected ICCs into account should be used in reexamining past studies and planning future studies to ensure that interventions are not judged effective solely on the basis of statistical artifacts. We strongly encourage investigators to report ICCs from IRGT trials and describe study characteristics clearly to aid these efforts.
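A minimal sketch of the kind of adjustment the authors call for: inflate the sample size from a standard two-sample calculation by the design effect implied by the expected ICC and treatment-group size. The formula and default values are textbook approximations chosen for illustration, not figures from the review.

```python
# Per-arm sample size for a two-arm trial with treatment delivered in groups,
# inflating the usual two-sample formula by the design effect 1 + (m - 1) * ICC.
# delta, icc, and group_size are illustrative inputs, not values from the review.
import math
from scipy import stats

def n_per_arm(delta, icc, group_size, power=0.80, alpha=0.05):
    """delta: standardized mean difference; group_size: members per treatment group."""
    z_alpha = stats.norm.ppf(1 - alpha / 2)
    z_power = stats.norm.ppf(power)
    n_independent = 2 * (z_alpha + z_power) ** 2 / delta ** 2   # per arm, independent data
    design_effect = 1 + (group_size - 1) * icc
    return math.ceil(n_independent * design_effect)

print(n_per_arm(delta=0.5, icc=0.05, group_size=8))   # about 85 per arm
```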


Subjects
Biometry/methods, Statistical Data Interpretation, Randomized Controlled Trials as Topic/methods, Research Design, Humans, Public Health Practice, Sample Size
11.
Dev Neurorehabil ; 21(4): 266-278, 2018 May.
Article in English | MEDLINE | ID: mdl-26809945

ABSTRACT

OBJECTIVE: This paper demonstrates how to conduct a meta-analysis that includes both between-group and single-case design (SCD) studies. The example examines whether choice-making interventions decrease challenging behaviors performed by people with disabilities. METHODS: We used a between-case d-statistic to conduct a meta-analysis of 15 between-group and SCD studies of 70 people with disabilities who received either a choice intervention or a control condition. We used robust variance estimation to adjust for dependencies caused by multiple effect sizes per study and conducted moderator, sensitivity, influence, and publication bias analyses. RESULTS: The random-effects average was d = 1.02 (standard error = 0.168), so the 95% confidence interval (CI) suggests that choice-making reduces challenging behaviors by 0.65 to 1.38 standard deviations. Studies that provided choice training produced a significantly larger intervention effect. CONCLUSION: Choice-making reduces challenging behaviors performed by people with disabilities.
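The reported interval can be reconstructed approximately from the summary statistics in the abstract. With robust variance estimation the critical value comes from a small-sample t distribution rather than the normal; the degrees of freedom below are an assumed value chosen to match the reported bounds, not a number taken from the paper.

```python
# Reconstructing the reported 95% CI from the abstract's summary statistics.
# The degrees of freedom are an assumption (robust variance estimation uses
# Satterthwaite-type small-sample df, which the abstract does not report).
from scipy import stats

d_bar, se, df = 1.02, 0.168, 13                   # df = 13 is illustrative
t_crit = stats.t.ppf(0.975, df)
print(d_bar - t_crit * se, d_bar + t_crit * se)   # roughly 0.66 and 1.38
```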


Subjects
Meta-Analysis as Topic, Neurological Rehabilitation/methods, Disabled Persons/psychology, Humans, Neurological Rehabilitation/standards, Research Design/standards
12.
Eval Rev ; 42(2): 248-280, 2018 Apr.
Article in English | MEDLINE | ID: mdl-30060688

ABSTRACT

BACKGROUND: Randomized experiments yield unbiased estimates of treatment effect, but such experiments are not always feasible. So researchers have searched for conditions under which randomized and nonrandomized experiments can yield the same answer. This search requires well-justified and informative correspondence criteria, that is, criteria by which we can judge if the results from an appropriately adjusted nonrandomized experiment well-approximate results from randomized experiments. Past criteria have relied exclusively on frequentist statistics, using criteria such as whether results agree in sign or statistical significance or whether results differ significantly from each other. OBJECTIVES: In this article, we show how Bayesian correspondence criteria offer more varied, nuanced, and informative answers than those from frequentist approaches. RESEARCH DESIGN: We describe the conceptual bases of Bayesian correspondence criteria and then illustrate many possibilities using an example that compares results from a randomized experiment to results from a parallel nonequivalent comparison group experiment in which participants could choose their condition. RESULTS: Results suggest that, in this case, the quasi-experiment reasonably approximated the randomized experiment. CONCLUSIONS: We conclude with a discussion of the advantages (computation of relevant quantities, interpretation, and estimation of quantities of interest for policy), disadvantages, and limitations of Bayesian correspondence criteria. We believe that in most circumstances, the advantages of Bayesian approaches far outweigh the disadvantages.


Subjects
Bayes Theorem, Empirical Research, Evaluation Studies as Topic, Randomized Controlled Trials as Topic, Bias, Propensity Score, Research Design
13.
Psychol Bull ; 132(4): 524-8; discussion 533-7, 2006 Jul.
Article in English | MEDLINE | ID: mdl-16822163

ABSTRACT

The H. Bösch, F. Steinkamp, and E. Boller meta-analysis reaches mixed and cautious conclusions about the possibility of psychokinesis. The authors argue that, for both methodological and philosophical reasons, it is nearly impossible to draw any conclusions from this body of research. The authors do not agree that any significant effect at all, no matter how small, is fundamentally important (Bösch et al., 2006, p. 517), and they suggest that psychokinesis researchers focus either on producing larger effects or on specifying the conditions under which they would be willing to accept the null hypothesis.


Subjects
Kinesis, Mental Processes, Psychological Theory, Humans, Mathematics, Parapsychology/methods, Psychology/methods
14.
J Clin Epidemiol ; 76: 82-8, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27079848

ABSTRACT

OBJECTIVES: We reanalyzed data from a previous randomized crossover design that administered high or low doses of intravenous immunoglobulin (IgG) to 12 patients with hypogammaglobulinaemia over 12 time points, with crossover after time 6. The objective was to see if results corresponded when analyzed as a set of single-case experimental designs vs. as a usual randomized controlled trial (RCT). STUDY DESIGN AND SETTINGS: Two blinded statisticians independently analyzed results. One analyzed the RCT comparing mean outcomes of group A (high dose IgG) to group B (low dose IgG) at the usual trial end point (time 6 in this case). The other analyzed all 12 time points for the group B patients as six single-case experimental designs analyzed together in a Bayesian nonlinear framework. RESULTS: In the randomized trial, group A [M = 794.93; standard deviation (SD) = 90.48] had significantly higher serum IgG levels at time six than group B (M = 283.89; SD = 71.10) (t = 10.88; df = 10; P < 0.001), yielding a mean difference of MD = 511.05 [standard error (SE) = 46.98]. For the single-case experimental designs, the effect from an intrinsically nonlinear regression was also significant and comparable in size with overlapping confidence intervals: MD = 495.00, SE = 54.41, and t = 495.00/54.41 = 9.10. Subsequent exploratory analyses indicated that how trend was modeled made a difference to these conclusions. CONCLUSIONS: The results of single-case experimental designs accurately approximated results from an RCT, although more work is needed to understand the conditions under which this holds.
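The RCT result quoted above can be checked directly from the reported summary statistics; with 12 patients and df = 10, the comparison implies 6 patients per group. The short sketch below redoes that arithmetic.

```python
# Recomputing the reported two-sample t-test from the abstract's summary
# statistics (n = 6 per group is implied by 12 patients and df = 10).
import math

m_a, sd_a = 794.93, 90.48    # group A: high-dose IgG
m_b, sd_b = 283.89, 71.10    # group B: low-dose IgG
n = 6
pooled_sd = math.sqrt(((n - 1) * sd_a ** 2 + (n - 1) * sd_b ** 2) / (2 * n - 2))
se = pooled_sd * math.sqrt(2 / n)
md = m_a - m_b
print(md, se, md / se)       # approximately 511.04, 46.98, 10.88
```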


Subjects
Agammaglobulinemia/drug therapy, Biomedical Research/methods, Immunoglobulins/administration & dosage, Randomized Controlled Trials as Topic, Research Design, Statistics as Topic/methods, Intravenous Administration, Bayes Theorem, Dose-Response Relationship (Drug), Humans, Time Factors
15.
J Appl Behav Anal ; 49(3): 656-73, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27174301

ABSTRACT

The published literature often underrepresents studies that do not find evidence for a treatment effect; this is often called publication bias. Literature reviews that fail to include such studies may overestimate the size of an effect. Only a few studies have examined publication bias in single-case design (SCD) research, but those studies suggest that publication bias may occur. This study surveyed SCD researchers about publication preferences in response to simulated SCD results that show a range of small to large effects. Results suggest that SCD researchers are more likely to submit manuscripts that show large effects for publication and are more likely to recommend acceptance of manuscripts that show large effects when they act as a reviewer. A nontrivial minority of SCD researchers (4% to 15%) would drop 1 or 2 cases from the study if the effect size is small and then submit for publication. This article ends with a discussion of implications for publication practices in SCD research.


Subjects
Publication Bias, Research Design, Researchers/psychology, Humans, Researchers/statistics & numerical data, Surveys and Questionnaires
16.
J Clin Epidemiol ; 76: 18-46, 2016 Aug.
Article in English | MEDLINE | ID: mdl-26272791

ABSTRACT

N-of-1 trials are a useful tool for clinicians who want to determine the effectiveness of a treatment in a particular individual. The reporting of N-of-1 trials has been variable and incomplete, hindering their usefulness in clinical decision making and their use by future researchers. This document presents the CONSORT (Consolidated Standards of Reporting Trials) extension for N-of-1 trials (CENT 2015). CENT 2015 extends the CONSORT 2010 guidance to facilitate the preparation and appraisal of reports of an individual N-of-1 trial or a series of prospectively planned, multiple, crossover N-of-1 trials. CENT 2015 elaborates on 14 items of the CONSORT 2010 checklist, totalling 25 checklist items (44 sub-items), and recommends diagrams to help authors document the progress of one participant through a trial, or of more than one participant through a trial or series of trials, as applicable. Examples of good reporting and evidence-based rationales for CENT 2015 checklist items are provided.


Subjects
Biomedical Research/standards, Clinical Trials as Topic/standards, Guidelines as Topic, Publishing/standards, Research Design/standards, Research Report/standards, Terminology as Topic, Humans
17.
Evid Based Commun Assess Interv ; 10(1): 44-58, 2016 Jan 02.
Article in English | MEDLINE | ID: mdl-27499802

ABSTRACT

Reporting guidelines, such as the Consolidated Standards of Reporting Trials (CONSORT) Statement, improve the reporting of research in the medical literature (Turner et al., 2012). Many such guidelines exist and the CONSORT Extension to Nonpharmacological Trials (Boutron et al., 2008) provides suitable guidance for reporting between-groups intervention studies in the behavioral sciences. The CONSORT Extension for N-of-1 Trials (CENT 2015) was developed for multiple crossover trials with single individuals in the medical sciences (Shamseer et al., 2015; Vohra et al., 2015), but there is no reporting guideline in the CONSORT tradition for single case research used in the behavioral sciences. We developed the Single Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016 to meet this need. This statement article describes the methodology of the development of the SCRIBE 2016, along with the outcome of 2 Delphi surveys and a consensus meeting of experts. We present the resulting 26-item SCRIBE 2016 checklist. The article complements the more detailed SCRIBE 2016 explanation and elaboration article (Tate et al., 2016) that provides a rationale for each of the items and examples of adequate reporting from the literature. Both these resources will assist authors to prepare reports of single case research with clarity, completeness, accuracy, and transparency. They will also provide journal reviewers and editors with a practical checklist against which such reports may be critically evaluated.

18.
J Sch Psychol ; 56: 133-42, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27268573

ABSTRACT

We developed a reporting guideline to provide authors with guidance about what should be reported when writing a paper for publication in a scientific journal using a particular type of research design: the single-case experimental design. This report describes the methods used to develop the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016. As a result of 2 online surveys and a 2-day meeting of experts, the SCRIBE 2016 checklist was developed, which is a set of 26 items that authors need to address when writing about single-case research. This article complements the more detailed SCRIBE 2016 Explanation and Elaboration article (Tate et al., 2016) that provides a rationale for each of the items and examples of adequate reporting from the literature. Both these resources will assist authors to prepare reports of single-case research with clarity, completeness, accuracy, and transparency. They will also provide journal reviewers and editors with a practical checklist against which such reports may be critically evaluated. We recommend that the SCRIBE 2016 is used by authors preparing manuscripts describing single-case research for publication, as well as journal reviewers and editors who are evaluating such manuscripts. SCIENTIFIC ABSTRACT: Reporting guidelines, such as the Consolidated Standards of Reporting Trials (CONSORT) Statement, improve the reporting of research in the medical literature (Turner et al., 2012). Many such guidelines exist and the CONSORT Extension to Nonpharmacological Trials (Boutron et al., 2008) provides suitable guidance for reporting between-groups intervention studies in the behavioral sciences. The CONSORT Extension for N-of-1 Trials (CENT 2015) was developed for multiple crossover trials with single individuals in the medical sciences (Shamseer et al., 2015; Vohra et al., 2015), but there is no reporting guideline in the CONSORT tradition for single-case research used in the behavioral sciences. We developed the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016 to meet this need. This Statement article describes the methodology of the development of the SCRIBE 2016, along with the outcome of 2 Delphi surveys and a consensus meeting of experts. We present the resulting 26-item SCRIBE 2016 checklist. The article complements the more detailed SCRIBE 2016 Explanation and Elaboration article (Tate et al., 2016) that provides a rationale for each of the items and examples of adequate reporting from the literature. Both these resources will assist authors to prepare reports of single-case research with clarity, completeness, accuracy, and transparency. They will also provide journal reviewers and editors with a practical checklist against which such reports may be critically evaluated. Supplemental materials: http://dx.doi.org/10.1037/arc0000026.supp.


Subjects
Behavioral Research/standards, Guidelines as Topic/standards, Research Design/standards, Humans
19.
Aphasiology ; 30(7): 862-876, 2016 Jul 02.
Article in English | MEDLINE | ID: mdl-27279674

ABSTRACT

We developed a reporting guideline to provide authors with guidance about what should be reported when writing a paper for publication in a scientific journal using a particular type of research design: the single-case experimental design. This report describes the methods used to develop the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016. As a result of 2 online surveys and a 2-day meeting of experts, the SCRIBE 2016 checklist was developed, which is a set of 26 items that authors need to address when writing about single-case research. This article complements the more detailed SCRIBE 2016 Explanation and Elaboration article (Tate et al., 2016) that provides a rationale for each of the items and examples of adequate reporting from the literature. Both these resources will assist authors to prepare reports of single-case research with clarity, completeness, accuracy, and transparency. They will also provide journal reviewers and editors with a practical checklist against which such reports may be critically evaluated. We recommend that the SCRIBE 2016 is used by authors preparing manuscripts describing single-case research for publication, as well as journal reviewers and editors who are evaluating such manuscripts.

20.
Phys Ther ; 96(7): e1-e10, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27371692

ABSTRACT

We developed a reporting guideline to provide authors with guidance about what should be reported when writing a paper for publication in a scientific journal using a particular type of research design: the single-case experimental design. This report describes the methods used to develop the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016. As a result of 2 online surveys and a 2-day meeting of experts, the SCRIBE 2016 checklist was developed, which is a set of 26 items that authors need to address when writing about single-case research. This article complements the more detailed SCRIBE 2016 Explanation and Elaboration article (Tate et al., 2016) that provides a rationale for each of the items and examples of adequate reporting from the literature. Both these resources will assist authors to prepare reports of single-case research with clarity, completeness, accuracy, and transparency. They will also provide journal reviewers and editors with a practical checklist against which such reports may be critically evaluated. We recommend that the SCRIBE 2016 is used by authors preparing manuscripts describing single-case research for publication, as well as journal reviewers and editors who are evaluating such manuscripts. SCIENTIFIC ABSTRACT: Reporting guidelines, such as the Consolidated Standards of Reporting Trials (CONSORT) Statement, improve the reporting of research in the medical literature (Turner et al., 2012). Many such guidelines exist and the CONSORT Extension to Nonpharmacological Trials (Boutron et al., 2008) provides suitable guidance for reporting between-groups intervention studies in the behavioral sciences. The CONSORT Extension for N-of-1 Trials (CENT 2015) was developed for multiple crossover trials with single individuals in the medical sciences (Shamseer et al., 2015; Vohra et al., 2015), but there is no reporting guideline in the CONSORT tradition for single-case research used in the behavioral sciences. We developed the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016 to meet this need. This Statement article describes the methodology of the development of the SCRIBE 2016, along with the outcome of 2 Delphi surveys and a consensus meeting of experts. We present the resulting 26-item SCRIBE 2016 checklist. The article complements the more detailed SCRIBE 2016 Explanation and Elaboration article (Tate et al., 2016) that provides a rationale for each of the items and examples of adequate reporting from the literature. Both these resources will assist authors to prepare reports of single-case research with clarity, completeness, accuracy, and transparency. They will also provide journal reviewers and editors with a practical checklist against which such reports may be critically evaluated.


Subjects
Behavior Therapy, Checklist, Delphi Technique, Guidelines as Topic, Research Design, Research Report/standards, Humans, Peer Review of Research/standards