Development and initial psychometric properties of the Research Complexity Index.
Norful, Allison A; Capili, Bernadette; Kovner, Christine; Jarrín, Olga F; Viera, Laura; McIntosh, Scott; Attia, Jacqueline; Adams, Bridget; Swartz, Kitt; Brown, Ashley; Barton-Burke, Margaret.
Affiliation
  • Norful AA; Columbia University School of Nursing, New York, NY, USA.
  • Capili B; Rockefeller University, New York, NY, USA.
  • Kovner C; New York University Rory Meyers College of Nursing, New York, NY, USA.
  • Jarrín OF; Rutgers The State University of New Jersey, New Brunswick, NJ, USA.
  • Viera L; University of North Carolina, Chapel Hill, NC, USA.
  • McIntosh S; University of Rochester Medical Center-CLIC, Rochester, NY, USA.
  • Attia J; University of Rochester Medical Center-CLIC, Rochester, NY, USA.
  • Adams B; Oregon Health & Science University, Portland, OR, USA.
  • Swartz K; Oregon Health & Science University, Portland, OR, USA.
  • Brown A; University of North Carolina, Chapel Hill, NC, USA.
  • Barton-Burke M; Memorial Sloan Kettering Cancer Center, New York, NY, USA.
J Clin Transl Sci ; 8(1): e91, 2024.
Article in English | MEDLINE | ID: mdl-38836248
ABSTRACT

Objective:

Research study complexity refers to the variables that contribute to the difficulty of conducting a clinical trial or study, including intervention type, design, sample, and data management. High complexity often requires more resources, advance planning, and specialized expertise to execute a study effectively. However, few instruments exist that scale study complexity across research designs. The purpose of this study was to develop, and establish initial psychometric properties of, an instrument that scales research study complexity.

Methods:

Technical and grammatical principles were followed to produce clear, concise items using language familiar to researchers. Items underwent face, content, and cognitive validity testing through quantitative surveys and qualitative interviews. Content validity indices were calculated, and iterative scale revision was performed. The instrument underwent pilot testing using 2 exemplar protocols, asking participants (n = 31) to score 25 items (e.g., study arms, data collection procedures).
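The abstract does not define how the content validity indices were computed; a minimal sketch of the standard item-level and scale-level CVI calculations, using made-up expert ratings rather than study data, is:

```python
# Illustrative sketch of content validity index (CVI) computations of the
# kind described in the Methods; the ratings below are hypothetical.
def item_cvi(ratings, threshold=3):
    """I-CVI: proportion of experts rating an item as relevant
    (>= threshold on a 4-point relevance scale)."""
    return sum(1 for r in ratings if r >= threshold) / len(ratings)

def scale_cvi_ave(all_ratings):
    """S-CVI/Ave: mean of the item-level CVIs across all items."""
    return sum(item_cvi(r) for r in all_ratings) / len(all_ratings)

# Hypothetical panel: 5 experts rate 3 items on a 1-4 relevance scale.
ratings = [
    [4, 4, 3, 4, 2],
    [3, 4, 4, 4, 4],
    [4, 3, 3, 4, 4],
]
print(item_cvi(ratings[0]))              # 0.8
print(round(scale_cvi_ave(ratings), 3))  # 0.933
```

Items with low I-CVI values are the usual candidates for the kind of iterative scale revision the Methods describe.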

Results:

The instrument (Research Complexity Index) demonstrated face, content, and cognitive validity. Item means and standard deviations ranged from 1.0 to 2.75 (Protocol 1) and 1.31 to 2.86 (Protocol 2). Corrected item-total correlations ranged from .030 to .618. Eight elements appear to be undercorrelated with the other elements. Cronbach's alpha was 0.586 (Protocol 1) and 0.764 (Protocol 2). Inter-rater reliability was fair (kappa = 0.338).

Conclusion:

Initial pilot testing demonstrates face, content, and cognitive validity; moderate internal consistency reliability; and fair inter-rater reliability. Further refinement of the instrument may increase reliability, thus providing a comprehensive method to assess study complexity and quantify related resources (e.g., staffing requirements).
Full text: 1 | Collection: 01-international | Database: MEDLINE | Language: English | Journal: J Clin Transl Sci | Year: 2024 | Document type: Article | Country of affiliation: United States