1.
Chin J Traumatol ; 25(6): 312-316, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35840469

ABSTRACT

The Transparency Ecosystem for Research and Journals in Medicine (TERM) working group summarized the essential recommendations that should be considered when reviewing and publishing a high-quality guideline. These recommendations from editors and reviewers comprise 10 essential requirements: a systematic review of existing relevant guidelines, guideline registration, a guideline protocol, stakeholders, conflicts of interest, clinical questions, systematic reviews, recommendation consensus, guideline reporting, and external review. The TERM working group abbreviates them as PAGE (essential requirements for Publishing clinical prActice GuidelinEs) and recommends that guideline authors, editors, and peer reviewers use them to produce high-quality guidelines.


Subjects
Practice Guidelines as Topic , Humans
2.
Chin Med J (Engl) ; 136(12): 1430-1438, 2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37192012

ABSTRACT

BACKGROUND: This study aimed to develop a comprehensive instrument for evaluating and ranking clinical practice guidelines, named the Scientific, Transparent and Applicable Rankings tool (STAR), and to test its reliability, validity, and usability.

METHODS: The study set up a multidisciplinary working group including guideline methodologists, statisticians, journal editors, clinicians, and other experts. A scoping review, Delphi methods, and hierarchical analysis were used to develop the STAR tool. We evaluated the instrument's intrinsic and interrater reliability, content and criterion validity, and usability.

RESULTS: STAR contained 39 items grouped into 11 domains. The mean intrinsic reliability of the domains, indicated by Cronbach's α coefficient, was 0.588 (95% confidence interval [CI]: 0.414, 0.762). Interrater reliability, assessed with Cohen's kappa coefficient, was 0.774 (95% CI: 0.740, 0.807) for methodological evaluators and 0.618 (95% CI: 0.587, 0.648) for clinical evaluators. The overall content validity index was 0.905. Pearson's r correlation for criterion validity was 0.885 (95% CI: 0.804, 0.932). The mean usability score of the items was 4.6, and the median time spent evaluating each guideline was 20 min.

CONCLUSION: The instrument performed well in terms of reliability, validity, and efficiency, and can be used to comprehensively evaluate and rank guidelines.
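The intrinsic reliability reported above is Cronbach's α, computed per domain from item scores. As a rough illustration of that statistic only (this is not the STAR authors' code; the function name and toy data are assumptions), a minimal sketch in Python:

```python
from statistics import pvariance

def cronbach_alpha(ratings):
    """Cronbach's alpha for a k-item scale.

    ratings: list of respondents, each a list of k item scores.
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)
    """
    k = len(ratings[0])
    # Variance of each item's scores across respondents
    item_vars = [pvariance([row[i] for row in ratings]) for i in range(k)]
    # Variance of each respondent's total score
    total_var = pvariance([sum(row) for row in ratings])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Toy data: two items that agree perfectly give alpha = 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))
```

Perfectly consistent items yield α = 1.0, while items that track each other less closely pull α down, which is the sense in which the domain-level α of 0.588 summarizes internal consistency.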


Subjects
Reproducibility of Results , Surveys and Questionnaires , Practice Guidelines as Topic , Humans