Results 1 - 20 of 124
1.
Pharmacol Res ; 199: 107015, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38036197

ABSTRACT

Existing reporting checklists lack the necessary level of detail and comprehensiveness to be used in guidelines on Chinese patent medicines (CPM). This study aims to develop reporting guidance for CPM guidelines based on the Reporting Items for Practice Guidelines in Healthcare (RIGHT) statement. We extracted information from CPM guidelines, existing reporting standards for traditional Chinese medicine (TCM), and the RIGHT statement and its extensions to form the initial pool of reporting items for CPM guidelines. Seventeen experts from diverse disciplines participated in two rounds of a Delphi process to refine and clarify the items. Finally, 18 authoritative consultants in the field of TCM and reporting guidelines reviewed and approved the RIGHT for CPM checklist. We added 16 new items and modified two items of the original RIGHT statement to form the RIGHT for CPM checklist, which contains 51 items grouped into seven sections and 23 topics. The new and revised items are distributed across four sections (Basic information, Background, Evidence, and Recommendations) and seven topics: Title/subtitle (one new and one revised item), Registration information (one new item), Brief description of the health problem (four new items), Guideline development groups (one revised item), Health care questions (two new items), Recommendations (two new items), and Rationale/explanation for recommendations (six new items). The RIGHT for CPM checklist aims to provide users with guidance for detailed, comprehensive, and transparent reporting and to help practitioners better understand and implement CPM guidelines.


Subject(s)
Checklist; Medicine, Chinese Traditional
2.
Value Health ; 27(9): 1196-1205, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38795956

ABSTRACT

OBJECTIVES: Economic evaluations (EEs) are commonly used by decision makers to understand the value of health interventions. The Consolidated Health Economic Evaluation Reporting Standards (CHEERS 2022) provide reporting guidelines for EEs. Healthcare systems will increasingly see new interventions that use artificial intelligence (AI) to perform their function. We developed the Consolidated Health Economic Evaluation Reporting Standards for Interventions that use AI (CHEERS-AI) to ensure EEs of AI-based health interventions are reported in a transparent and reproducible manner. METHODS: Potential CHEERS-AI reporting items were informed by 2 published systematic literature reviews of EEs and a contemporary update. A Delphi study was conducted using 3 survey rounds to elicit multidisciplinary expert views on 26 potential items, through a 9-point Likert rating scale and qualitative comments. An online consensus meeting was held to finalize outstanding reporting items. A digital health patient group reviewed the final checklist from a patient perspective. RESULTS: A total of 58 participants responded to survey round 1; 42 and 31 of them responded to rounds 2 and 3, respectively. Nine participants joined the consensus meeting. Ultimately, 38 reporting items were included in CHEERS-AI. They comprised the 28 original CHEERS 2022 items, plus 10 new AI-specific reporting items. Additionally, 8 of the original CHEERS 2022 items were elaborated on to ensure AI-specific nuance is reported. CONCLUSIONS: CHEERS-AI should be used when reporting an EE of an intervention that uses AI to perform its function. CHEERS-AI will help decision makers and reviewers understand important AI-specific details of an intervention, and any implications for the EE methods used and the cost-effectiveness conclusions.


Subject(s)
Artificial Intelligence; Delphi Technique; Artificial Intelligence/economics; Humans; Cost-Benefit Analysis/methods; Checklist; Consensus; Surveys and Questionnaires; Economics, Medical
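
To make the Delphi tallying described in this record concrete, the following sketch (Python) summarizes 9-point Likert ratings for a single candidate item. The 70% threshold on ratings of 7-9 and the example ratings are illustrative assumptions, not the CHEERS-AI study's actual consensus rule or data:

from collections import Counter

def summarize_item(ratings, include_threshold=0.70):
    # Summarize one Delphi item's ratings (integers 1-9) for a single round.
    n = len(ratings)
    counts = Counter(ratings)
    prop_high = sum(v for k, v in counts.items() if k >= 7) / n  # rated 7-9
    prop_low = sum(v for k, v in counts.items() if k <= 3) / n   # rated 1-3
    return {
        "n": n,
        "prop_rated_7_to_9": round(prop_high, 2),
        "prop_rated_1_to_3": round(prop_low, 2),
        "consensus_to_include": prop_high >= include_threshold,  # assumed rule
    }

# Hypothetical round-3 ratings from 31 panelists for one AI-specific item.
ratings_round3 = [9, 8, 8, 7, 9, 6, 8, 7, 9, 9, 5, 8, 7, 8, 9, 7, 8, 6, 9, 8,
                  7, 9, 8, 7, 8, 9, 7, 8, 6, 9, 8]
print(summarize_item(ratings_round3))
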
3.
Clin Chem Lab Med ; 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38965828

ABSTRACT

There is a need for standards for the generation and reporting of Biological Variation (BV) reference data. The absence of standards affects the quality and transportability of BV data, compromising important clinical applications. To address this issue, international expert groups under the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) have developed an online resource (https://tinyurl.com/bvmindmap) in the form of an interactive mind map that serves as a guideline for researchers planning, performing, and reporting BV studies. The mind map addresses study design, data analysis, and reporting criteria, providing embedded links to relevant references and resources. It also incorporates a checklist approach, identifying a Minimum Data Set (MDS) to enable the transportability of BV data, and incorporates the Biological Variation Data Critical Appraisal Checklist (BIVAC) to assess study quality. The mind map is openly accessible and is disseminated through the EFLM BV Database website, promoting accessibility and compliance with a reporting standard, thereby providing a tool to ensure the quality, consistency, and comparability of BV data. Thus, comparable to the STARD initiative for diagnostic accuracy studies, the mind map introduces a Standard for Reporting Biological Variation Data Studies (STARBIV), which can enhance the reporting quality of BV studies, foster user confidence, provide better decision support, and be used as a tool for critical appraisal. Ongoing refinement is expected to adapt to emerging methodologies, ensuring a positive trajectory toward improving the validity and applicability of BV data in clinical practice.

4.
Health Qual Life Outcomes ; 22(1): 48, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38978063

ABSTRACT

PURPOSE: Although comprehensive and widespread guidelines on how to conduct systematic reviews of outcome measurement instruments (OMIs) exist, for example from the COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) initiative, key information is often missing in published reports. This article describes the development of an extension of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guideline: PRISMA-COSMIN for OMIs 2024. METHODS: The development process followed the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) guidelines and included a literature search, expert consultations, a Delphi study, a hybrid workgroup meeting, pilot testing, and an end-of-project meeting, with integrated patient/public involvement. RESULTS: From the literature and expert consultation, 49 potentially relevant reporting items were identified. Round 1 of the Delphi study was completed by 103 panelists, whereas rounds 2 and 3 were completed by 78 panelists. After 3 rounds, agreement (≥ 67%) on inclusion and wording was reached for 44 items. Eleven items without consensus on inclusion and/or wording were discussed at a workgroup meeting attended by 24 participants. Agreement was reached on the inclusion and wording of 10 items and the deletion of 1 item. Pilot testing with 65 authors of OMI systematic reviews further improved the guideline through minor changes in wording and structure, finalized during the end-of-project meeting. The final checklist to facilitate the reporting of full systematic review reports contains 54 (sub)items addressing the review's title, abstract, plain language summary, open science, introduction, methods, results, and discussion. Thirteen items pertaining to the title and abstract are also included in a separate abstract checklist, guiding authors in reporting, for example, conference abstracts. CONCLUSION: PRISMA-COSMIN for OMIs 2024 consists of two checklists (full reports; abstracts), their corresponding explanation and elaboration documents detailing the rationale and examples for each item, and a data flow diagram. PRISMA-COSMIN for OMIs 2024 can improve the reporting of systematic reviews of OMIs, fostering their reproducibility and allowing end users to appraise the quality of OMIs and select the most appropriate OMI for a specific application. NOTE: To encourage its wide dissemination, this article is freely accessible on the websites of the journals Health and Quality of Life Outcomes, Journal of Clinical Epidemiology, Journal of Patient-Reported Outcomes, and Quality of Life Research.


Subject(s)
Delphi Technique; Outcome Assessment, Health Care; Systematic Reviews as Topic; Humans; Guidelines as Topic; Checklist; Research Design/standards; Consensus
5.
Qual Life Res ; 33(8): 2029-2046, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38980635

ABSTRACT

PURPOSE: Although comprehensive and widespread guidelines on how to conduct systematic reviews of outcome measurement instruments (OMIs) exist, for example from the COSMIN (COnsensus-based Standards for the selection of health Measurement INstruments) initiative, key information is often missing in published reports. This article describes the development of an extension of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guideline: PRISMA-COSMIN for OMIs 2024. METHODS: The development process followed the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) guidelines and included a literature search, expert consultations, a Delphi study, a hybrid workgroup meeting, pilot testing, and an end-of-project meeting, with integrated patient/public involvement. RESULTS: From the literature and expert consultation, 49 potentially relevant reporting items were identified. Round 1 of the Delphi study was completed by 103 panelists, whereas rounds 2 and 3 were completed by 78 panelists. After 3 rounds, agreement (≥ 67%) on inclusion and wording was reached for 44 items. Eleven items without consensus on inclusion and/or wording were discussed at a workgroup meeting attended by 24 participants. Agreement was reached on the inclusion and wording of 10 items and the deletion of 1 item. Pilot testing with 65 authors of OMI systematic reviews further improved the guideline through minor changes in wording and structure, finalized during the end-of-project meeting. The final checklist to facilitate the reporting of full systematic review reports contains 54 (sub)items addressing the review's title, abstract, plain language summary, open science, introduction, methods, results, and discussion. Thirteen items pertaining to the title and abstract are also included in a separate abstract checklist, guiding authors in reporting, for example, conference abstracts. CONCLUSION: PRISMA-COSMIN for OMIs 2024 consists of two checklists (full reports; abstracts), their corresponding explanation and elaboration documents detailing the rationale and examples for each item, and a data flow diagram. PRISMA-COSMIN for OMIs 2024 can improve the reporting of systematic reviews of OMIs, fostering their reproducibility and allowing end users to appraise the quality of OMIs and select the most appropriate OMI for a specific application. NOTE: To encourage its wide dissemination, this article is freely accessible on the websites of the journals Health and Quality of Life Outcomes, Journal of Clinical Epidemiology, Journal of Patient-Reported Outcomes, and Quality of Life Research.


Subject(s)
Delphi Technique; Outcome Assessment, Health Care; Systematic Reviews as Topic; Humans; Guidelines as Topic; Research Design/standards; Checklist
6.
Neurosurg Rev ; 47(1): 174, 2024 Apr 21.
Article in English | MEDLINE | ID: mdl-38643293

ABSTRACT

Brain arteriovenous malformations (bAVMs) are rare but high-risk developmental anomalies of the vascular system. Microsurgery through craniotomy is considered the mainstay treatment for many grades of bAVMs. However, a significant challenge emerges in the existing body of clinical studies on open surgery for bAVMs: the lack of reproducibility and comparability. This study aims to assess the quality of studies reporting clinical and surgical outcomes for bAVMs treated by open surgery and to develop a reporting guideline checklist focusing on essential elements to ensure comparability and reproducibility. This systematic literature review followed the PRISMA guidelines, searching the Medline, Embase, and Web of Science databases for studies published between January 1, 2018, and December 1, 2023. Included studies were scrutinized across seven domains: (1) assessment of how studies reported the baseline characteristics of the patient sample; (2) assessment and reporting of bAVM grading, anatomical characteristics, and radiological aspects; (3) angioarchitecture assessment and reporting; (4) reporting of definitions of pivotal concepts; (5) reporting of neurosurgeon and staff characteristics; (6) reporting of surgical details; and (7) assessment and reporting of clinical and surgical outcomes and adverse events (AEs). A total of 47 studies comprising 5,884 patients were included. Scrutiny of these studies showed that the current literature on open surgery for bAVMs is deficient in many aspects, ranging from fundamental methodological information to the baseline characteristics of included patients and data reporting. Included studies demonstrated a lack of reproducibility that hinders the building of cumulative evidence. A bAVM Open Surgery Reporting Guideline with 65 items distributed across eight domains was developed and is proposed in this study to address these shortcomings. This systematic review found that the available literature on microsurgery for bAVM treatment, particularly studies reporting clinical and surgical outcomes, lacks methodological rigor and reporting quality. The proposed bAVM Open Surgery Reporting Guideline covers all essential aspects and is a potential solution to address these shortcomings and to increase transparency, comparability, and reproducibility in this setting. This proposal aims to advance the level of evidence and enhance knowledge regarding open surgery for bAVMs.


Subject(s)
Intracranial Arteriovenous Malformations; Humans; Intracranial Arteriovenous Malformations/surgery; Reproducibility of Results; Treatment Outcome; Neurosurgical Procedures/methods; Microsurgery/methods
7.
Value Health ; 26(10): 1461-1473, 2023 10.
Article in English | MEDLINE | ID: mdl-37414276

ABSTRACT

OBJECTIVES: Although the ISPOR Value of Information (VOI) Task Force's reports outline VOI concepts and provide good-practice recommendations, there is no guidance for reporting VOI analyses. VOI analyses are usually performed alongside economic evaluations, for which the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) 2022 Statement provides reporting guidelines. Thus, we developed the CHEERS-VOI checklist to provide reporting guidance and a checklist to support the transparent, reproducible, and high-quality reporting of VOI analyses. METHODS: A comprehensive literature review generated a list of 26 candidate reporting items. These candidate items underwent a three-round Delphi procedure. Participants rated each item on a 9-point Likert scale to indicate its relevance when reporting the minimal, essential information about VOI methods and provided comments. The Delphi results were reviewed at 2-day consensus meetings, and the checklist was finalized using anonymous voting. RESULTS: We had 30, 25, and 24 Delphi respondents in rounds 1, 2, and 3, respectively. After incorporating revisions recommended by the Delphi participants, all 26 candidate items proceeded to the 2-day consensus meetings. The final CHEERS-VOI checklist includes all CHEERS items, but 7 items require elaboration when reporting VOI. Further, 6 new items were added to report information relevant only to VOI (e.g., the VOI methods applied). CONCLUSIONS: The CHEERS-VOI checklist should be used when a VOI analysis is performed alongside an economic evaluation. The CHEERS-VOI checklist will help decision makers, analysts, and peer reviewers assess and interpret VOI analyses, thereby increasing transparency and rigor in decision making.


Subject(s)
Checklist; Research Report; Humans; Cost-Benefit Analysis; Reference Standards; Consensus
8.
BMC Med Res Methodol ; 23(1): 20, 2023 01 21.
Article in English | MEDLINE | ID: mdl-36670375

ABSTRACT

BACKGROUND: Reporting quality is a critical issue in the health sciences. Adopting reporting guidelines has been shown to be an effective way of enhancing the reporting quality and transparency of clinical research. In 2012, we found that only 7 (7/1221, 0.6%) journals in China had adopted the Consolidated Standards of Reporting Trials (CONSORT) statement. The aim of this study was to assess the implementation status of CONSORT and other reporting guidelines for clinical studies in China. METHODS: A cross-sectional bibliometric study was conducted. Eight medical databases were systematically searched, and 1039 medical journals published in mainland China, Hong Kong, Macau, and Taiwan were included. The basic characteristics, including subject, language, place of publication, indexing databases, and journal impact factor, were extracted. The endorsement of reporting guidelines was assessed by a modified 5-level evaluation tool, namely i) positive active, ii) positive weak, iii) passive moderate, iv) passive weak, and v) none. RESULTS: Among the included journals, 24.1% endorsed CONSORT and 0.8% endorsed CONSORT extensions. For STROBE (STrengthening the Reporting of Observational Studies in Epidemiology), PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses), STARD (An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies), and CARE (CAse REport guidelines), the endorsement proportions were 17.2%, 16.6%, 16.4%, and 14.8%, respectively. The endorsement proportions for SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials), TRIPOD (Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis), AGREE (Appraisal of Guidelines for Research and Evaluation), and RIGHT (Reporting Items for Practice Guidelines in Healthcare) were below 0.7%. CONCLUSIONS: Our results showed that the implementation of reporting guidelines was low. We suggest the following initiatives: i) enhancing the level of journal endorsement of reporting guidelines; ii) strengthening collaboration among authors, reviewers, editors, and other stakeholders; iii) providing training courses for stakeholders; iv) establishing bases for a reporting guidelines network in China; v) adopting the endorsement of reporting guidelines in the policies of the China Periodicals Association (CPA); and vi) promoting Chinese medical journals into international evaluation systems and publishing in English.


Subject(s)
Periodicals as Topic; China; Cross-Sectional Studies; Reference Standards
9.
J Med Internet Res ; 25: e46694, 2023 05 10.
Article in English | MEDLINE | ID: mdl-37163336

ABSTRACT

BACKGROUND: Implementation of digital health technologies has grown rapidly, but many remain limited to pilot studies due to challenges such as a lack of evidence or barriers to implementation. Overcoming these challenges requires learning from previous implementations and systematically documenting implementation processes to better understand the real-world impact of a technology and identify effective strategies for future implementation. OBJECTIVE: A group of global experts, facilitated by the Geneva Digital Health Hub, developed the Guidelines and Checklist for the Reporting on Digital Health Implementations (iCHECK-DH, pronounced "I checked") to improve the completeness of reporting on digital health implementations. METHODS: A guideline development group was convened to define key considerations and criteria for reporting on digital health implementations. To ensure the practicality and effectiveness of the checklist, it was pilot-tested by applying it to several real-world digital health implementations, and adjustments were made based on the feedback received. The guiding principle for the development of iCHECK-DH was to identify the minimum set of information needed to comprehensively define a digital health implementation, to support the identification of key factors for success and failure, and to enable others to replicate it in different settings. RESULTS: The result was a 20-item checklist, with detailed explanations and examples provided in the paper. The authors anticipate that widespread adoption will standardize the quality of reporting and, indirectly, improve implementation standards and best practices. CONCLUSIONS: Guidelines for reporting on digital health implementations are important to ensure the accuracy, completeness, and consistency of reported information. This allows for meaningful comparison and evaluation of results, transparency, and accountability, and it informs stakeholder decision-making. iCHECK-DH facilitates standardization of the way information is collected and reported, improving systematic documentation and knowledge transfer that can lead to the development of more effective digital health interventions and better health outcomes.


Subject(s)
Checklist; Knowledge Management; Telemedicine; Humans; Research Design; Health Plan Implementation; Implementation Science; Guidelines as Topic
10.
BMC Med Res Methodol ; 22(1): 12, 2022 Jan 13.
Article in English | MEDLINE | ID: mdl-35026997

ABSTRACT

BACKGROUND: While many studies have consistently found incomplete reporting of regression-based prediction model studies, evidence is lacking for machine learning-based prediction model studies. We aim to systematically review the adherence of Machine Learning (ML)-based prediction model studies to the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) Statement. METHODS: We included articles reporting on the development or external validation of a multivariable prediction model (either diagnostic or prognostic) developed using supervised ML for individualized predictions across all medical fields. We searched PubMed from 1 January 2018 to 31 December 2019. Data extraction was performed using the 22-item checklist for reporting of prediction model studies (www.TRIPOD-statement.org). We measured the overall adherence per article and the adherence per TRIPOD item. RESULTS: Our search identified 24,814 articles, of which 152 were included: 94 (61.8%) prognostic and 58 (38.2%) diagnostic prediction model studies. Overall, articles adhered to a median of 38.7% (IQR 31.0-46.4%) of TRIPOD items. No article fully adhered to complete reporting of the abstract, and very few reported the flow of participants (3.9%, 95% CI 1.8 to 8.3), an appropriate title (4.6%, 95% CI 2.2 to 9.2), blinding of predictors (4.6%, 95% CI 2.2 to 9.2), model specification (5.2%, 95% CI 2.4 to 10.8), and the model's predictive performance (5.9%, 95% CI 3.1 to 10.9). There was often complete reporting of the source of data (98.0%, 95% CI 94.4 to 99.3) and interpretation of the results (94.7%, 95% CI 90.0 to 97.3). CONCLUSION: Similar to prediction model studies developed using conventional regression-based techniques, the completeness of reporting is poor. Essential information for deciding whether to use the model (i.e., model specification and its performance) is rarely reported. However, some items and sub-items of TRIPOD might be less suitable for ML-based prediction model studies, and thus TRIPOD requires extensions. Overall, there is an urgent need to improve the reporting quality and usability of research to avoid research waste. SYSTEMATIC REVIEW REGISTRATION: PROSPERO, CRD42019161764.


Subject(s)
Checklist; Models, Statistical; Humans; Machine Learning; Prognosis; Supervised Machine Learning
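
For readers who want to reproduce this kind of adherence summary, the sketch below (Python, with simulated 0/1 scores rather than the review's data) computes per-article adherence as a median with IQR and per-item adherence as a proportion with a 95% confidence interval. The Wilson interval is an assumption, since the abstract does not state which CI method was used:

import numpy as np
from statsmodels.stats.proportion import proportion_confint

rng = np.random.default_rng(0)
n_articles, n_items = 152, 22                 # sizes taken from the abstract
scores = rng.binomial(1, 0.39, size=(n_articles, n_items))  # simulated 0/1 adherence scores

# Per-article adherence: share of items fully reported in each article.
per_article = scores.mean(axis=1) * 100
q25, median, q75 = np.percentile(per_article, [25, 50, 75])
print(f"median adherence {median:.1f}% (IQR {q25:.1f}-{q75:.1f}%)")

# Per-item adherence: proportion of articles reporting the item, with 95% CI.
for item in range(n_items):
    k = int(scores[:, item].sum())
    lo, hi = proportion_confint(k, n_articles, alpha=0.05, method="wilson")
    print(f"item {item + 1:2d}: {100 * k / n_articles:5.1f}% "
          f"(95% CI {100 * lo:.1f} to {100 * hi:.1f})")
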
11.
Health Res Policy Syst ; 20(1): 82, 2022 Jul 23.
Article in English | MEDLINE | ID: mdl-35870939

ABSTRACT

BACKGROUND: Evidence briefs for policy (EBP) draw on the best available data and research evidence (e.g., systematic reviews) to help clarify policy problems, frame options for addressing them, and identify implementation considerations for policymakers in a given context. An increasing number of governments, non-governmental organizations, and research groups have been developing EBP on a wide variety of topics. However, the reporting characteristics of EBP vary across organizations due to a lack of internationally accepted standard reporting guidelines. This project aims to develop a STandard reporting guideline of Evidence briefs for Policy (STEP), which will encompass a reporting checklist, a STEP statement, and a user manual. METHODS: We will refer to and adapt the methods recommended by the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) network. The key actions include: (1) developing a protocol; (2) establishing an international multidisciplinary STEP working group (consisting of a Coordination Team and a Delphi Panel); (3) generating an initial draft of potential items for the STEP reporting checklist through a comprehensive review of EBP-related literature and documents; (4) conducting a modified Delphi process to select and refine the reporting checklist; (5) using STEP to evaluate published policy briefs in different countries; (6) finalizing the checklist; (7) developing the STEP statement and the user manual; (8) translating STEP into different languages; and (9) testing its reliability through real-world use. DISCUSSION: Our protocol describes the development process for STEP. It will directly address what information should be reported in EBP and how, and it will contribute to improving their quality. Decision-makers, researchers, journal editors, evaluators, and other stakeholders who support evidence-informed policymaking through mechanisms like EBP will benefit from STEP. REGISTRATION: We registered the protocol on the EQUATOR network (https://www.equator-network.org/library/reporting-guidelines-under-development/#84).


Subject(s)
Checklist; Research Report; Humans; Policies; Reproducibility of Results; Review Literature as Topic
12.
Res Nurs Health ; 44(2): 344-352, 2021 04.
Article in English | MEDLINE | ID: mdl-33386768

ABSTRACT

Unfinished, rationed, missed, or otherwise undone nursing care is a phenomenon observed across health-care settings worldwide. Irrespective of differing terminology, it has repeatedly been linked to adverse outcomes for both patients and nursing staff. With growing numbers of publications on the topic, scholars have acknowledged persistent barriers to meaningful comparison across studies, settings, and health-care systems. The aim of this study was thus to develop a guideline to strengthen transparent reporting in research on unfinished nursing care. An international four-person steering group led a consensus process including a two-round online Delphi survey and a workshop with 38 international experts. The study was embedded in the RANCARE COST Action: Rationing Missed Nursing Care: An international and multidimensional Problem. Participation was voluntary. The resulting 40-item RANCARE guideline provides recommendations for transparent and comprehensive reporting on unfinished nursing care regarding conceptualization, measurement, contextual information, and data analyses. By increasing the transparency and comprehensiveness in reporting of studies on unfinished nursing care, the RANCARE guideline supports efficient use of the research results, for example, allowing researchers and nurses to take purposeful actions, with the goal of improving the safety and quality of health-care services.


Subject(s)
Nursing Research; Practice Guidelines as Topic; Practice Patterns, Nurses'; Humans
13.
J Formos Med Assoc ; 120(1 Pt 3): 609-620, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32703697

ABSTRACT

BACKGROUND/PURPOSE: Significant associations between suicidal behaviors and inappropriate suicide reporting in the media have been reported. This study aimed to examine the long-term quality of suicide reports under surveillance by the Taiwan Suicide Prevention Center (TSPC). METHODS: The TSPC conducted daily surveillance with timely feedback and interactive approaches with media professionals. To examine reporting trends under this surveillance, daily adherence to the 12-item World Health Organization (WHO) guidelines was analyzed for print media published between 2010 and 2018, and for online media in 2017 and 2018. Trend analysis using the Cochran-Armitage test was performed to estimate the significance of changes in adherence. RESULTS: In total, 5529 print reports and 16,445 person-event items from online media were reviewed. The number of suicide reports in print newspapers markedly decreased, while it increased for online media. Surveillance of print media showed statistically significant improvement in adherence to all guideline items except one ("Do not publish photos or suicide notes"). Adherence rates were high (∼90%) for 6 of the 12 items over the study duration. Marked improvement was observed for three items: reporting of details, giving simplistic reasons, and providing helpline resources. However, the items "Highlight the alternatives to suicide" and "Work closely with health authorities to present the facts" had the lowest adherence. Online media showed findings and an adherence profile similar to those of print media. CONCLUSION: The quality of suicide reports significantly improved for most WHO guideline items. Development of psychiatric-media liaisons may be beneficial for further improvement.


Subject(s)
Suicide Prevention; Guideline Adherence; Humans; Mass Media; Taiwan/epidemiology; World Health Organization
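
The Cochran-Armitage trend analysis mentioned in this record can be sketched as follows. The yearly counts are hypothetical, and the hand-rolled test statistic (without continuity correction) is offered only as an illustration of the method, not as the TSPC's actual analysis code:

import numpy as np
from scipy.stats import norm

def cochran_armitage(successes, totals, scores=None):
    # Two-sided Cochran-Armitage test for a linear trend in binomial proportions.
    r = np.asarray(successes, dtype=float)
    n = np.asarray(totals, dtype=float)
    t = np.arange(len(r), dtype=float) if scores is None else np.asarray(scores, dtype=float)
    p_bar = r.sum() / n.sum()                         # pooled proportion under H0
    stat = np.sum(t * (r - n * p_bar))
    var = p_bar * (1 - p_bar) * (np.sum(n * t**2) - np.sum(n * t) ** 2 / n.sum())
    z = stat / np.sqrt(var)
    return z, 2 * norm.sf(abs(z))

# Hypothetical yearly counts, 2010-2018: reports adhering to one WHO item / total reports.
adherent = [310, 340, 360, 380, 400, 410, 430, 440, 450]
reports  = [700, 690, 680, 660, 650, 640, 620, 610, 600]
z, p = cochran_armitage(adherent, reports)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
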
14.
BMC Med Res Methodol ; 20(1): 19, 2020 02 03.
Article in English | MEDLINE | ID: mdl-32013883

ABSTRACT

BACKGROUND: There are a growing number of studies using mediation analysis to understand the mechanisms of health interventions and exposures. Recent work has shown that the reporting of these studies is heterogeneous and incomplete. This problem stifles clinical application, reproducibility, and evidence synthesis. This paper describes the processes and methods that will be used to develop a guideline for reporting studies of mediation analyses (AGReMA). METHODS/DESIGN: AGReMA will be developed over five overlapping stages. Stage one will comprise a systematic review to examine relevant evidence on the quality of reporting in published studies that use mediation analysis. In the second stage, we will consult a group of methodologists and applied researchers through a Delphi process to identify items that should be considered for inclusion in AGReMA. The third stage will involve a consensus meeting to consolidate and prioritise key items to be included in AGReMA. The fourth stage will involve the production of AGReMA and an accompanying explanation and elaboration document. In the final stage, we will disseminate the AGReMA statement via journals, conferences, and professional meetings across multiple disciplines. DISCUSSION: The development and implementation of AGReMA will improve the standardization, transparency, and completeness of reporting in studies that use mediation analysis to understand the mechanisms of health interventions and exposures.


Subject(s)
Early Medical Intervention/methods; Guidelines as Topic; Mediation Analysis; Consensus; Delphi Technique; Environmental Exposure/adverse effects; Humans; Research Design; Systematic Reviews as Topic
15.
Oral Dis ; 26(3): 696-706, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31845484

ABSTRACT

OBJECTIVES: To study whether specific recommendations aimed at reducing avoidable research waste were included in the author instructions of leading dental journals. METHODS: We identified 109 peer-reviewed, original research-oriented dental journals indexed in the MEDLINE and/or SCIE databases in 2018. Two authors independently extracted information regarding the endorsement of reporting guidelines (RGs), the ICMJE recommendations, trial or systematic review registration, and open access (OA) and data sharing policies. RESULTS: All 109 journals provided online "instructions to authors," among which 64 journals (58.7%) mentioned RGs. The ICMJE recommendations were endorsed by 74 journals (67.9%), and trial and systematic review registration were mentioned by 48 (44.0%) and 6 (5.5%) of the journals, respectively. In terms of OA, most journals were hybrid OA (82.0%) or direct OA (15.4%), while a data sharing policy was recommended by 32 (29.4%) journals. Statistical analyses suggest that these policies were more frequently mentioned by SCIE-indexed journals, higher-impact journals, and journals that endorsed the ICMJE recommendations. CONCLUSION: Reporting guidelines, OA, and data sharing are important tools for enhancing research communication and translation and for reducing avoidable research waste. However, they are currently not widely endorsed by dental journals.


Subject(s)
Dentistry; Editorial Policies; Periodicals as Topic; Research Design/standards; Information Dissemination; Open Access Publishing
16.
BMC Med Inform Decis Mak ; 20(Suppl 4): 292, 2020 12 14.
Article in English | MEDLINE | ID: mdl-33317497

ABSTRACT

BACKGROUND: To reduce cancer mortality and improve cancer outcomes, it is critical to understand the various cancer risk factors (RFs) across different domains (e.g., genetic, environmental, and behavioral risk factors) and levels (e.g., individual, interpersonal, and community levels). However, prior research on RFs for cancer outcomes has primarily focused on individual-level RFs due to the lack of integrated datasets that contain multi-level, multi-domain RFs. Further, the lack of consensus and proper guidance on systematically identifying RFs also increases the difficulty of RF selection from heterogeneous data sources in a multi-level integrative data analysis (mIDA) study. More importantly, as mIDA studies require integrating heterogeneous data sources, the data integration processes in the limited number of existing mIDA studies are inconsistently performed and poorly documented, threatening transparency and reproducibility. METHODS: Informed by the National Institute on Minority Health and Health Disparities (NIMHD) research framework, we (1) reviewed existing reporting guidelines from the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network and (2) developed a theory-driven reporting guideline to guide the RF variable selection, data source selection, and data integration process. We then developed an ontology to standardize the documentation of the RF selection and data integration process in mIDA studies. RESULTS: We summarized the review results and created a reporting guideline, ATTEST, for reporting the variable selection, data source selection, and integration process. We provide an ATTEST checklist to help researchers annotate and clearly document each step of their mIDA studies to ensure transparency and reproducibility. We used ATTEST to report two mIDA case studies and further transformed the annotation results into semantic triples, so that the relationships among variables, data sources, and integration processes are explicitly standardized and modeled using the classes and properties from OD-ATTEST. CONCLUSION: Our ontology-based reporting guideline addresses key challenges in current mIDA studies for cancer outcomes research by providing (1) theory-driven guidance for multi-level and multi-domain RF variable and data source selection and (2) standardized, ontology-powered documentation of the data selection and integration processes, thereby enabling the sharing of mIDA study reports among researchers.


Subject(s)
Neoplasms; Documentation; Humans; Information Storage and Retrieval; Neoplasms/genetics; Outcome Assessment, Health Care; Reproducibility of Results
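
To illustrate what "transforming annotation results into semantic triples" can look like in practice, the sketch below uses rdflib with placeholder namespaces, classes, and property names; the real OD-ATTEST ontology defines its own terms, which are not reproduced here:

from rdflib import Graph, Literal, Namespace, RDF, RDFS

ATTEST = Namespace("http://example.org/od-attest#")   # placeholder ontology IRI
EX = Namespace("http://example.org/study1#")          # placeholder study IRI

g = Graph()
g.bind("attest", ATTEST)
g.bind("ex", EX)

# A risk-factor variable, the data source it came from, and the integration
# step that linked it into the analytic dataset (all names are hypothetical).
g.add((EX.smoking_status, RDF.type, ATTEST.RiskFactorVariable))
g.add((EX.smoking_status, RDFS.label, Literal("Smoking status")))
g.add((EX.smoking_status, ATTEST.hasLevel, Literal("individual")))
g.add((EX.smoking_status, ATTEST.hasDomain, Literal("behavioral")))
g.add((EX.smoking_status, ATTEST.derivedFrom, EX.ehr_source))
g.add((EX.ehr_source, RDF.type, ATTEST.DataSource))
g.add((EX.linkage_step, RDF.type, ATTEST.IntegrationProcess))
g.add((EX.linkage_step, ATTEST.integrates, EX.ehr_source))

print(g.serialize(format="turtle"))
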
17.
BMC Med ; 16(1): 210, 2018 11 16.
Article in English | MEDLINE | ID: mdl-30442137

ABSTRACT

BACKGROUND: Adequate reporting of adaptive designs (ADs) maximises their potential benefits in the conduct of clinical trials. Transparent reporting can help address some obstacles and concerns relating to the use of ADs. Currently, there are deficiencies in the reporting of AD trials. To overcome this, we have developed a consensus-driven extension to the CONSORT statement for randomised trials using an AD. This paper describes the processes and methods used to develop this extension rather than providing a detailed explanation of the guideline. METHODS: We developed the guideline in seven overlapping stages: 1) building on prior research to inform the need for a guideline; 2) a scoping literature review to inform future stages; 3) drafting the first checklist version involving an External Expert Panel; 4) a two-round Delphi process involving international, multidisciplinary, and cross-sector key stakeholders; 5) a consensus meeting to advise, through voting, which reporting items to retain, and to discuss the structure of what to include in the supporting explanation and elaboration (E&E) document; 6) refining and finalising the checklist; and 7) writing up and disseminating the E&E document. The CONSORT Executive Group oversaw the entire development process. RESULTS: Delphi survey response rates were 94/143 (66%), 114/156 (73%), and 79/143 (55%) in round 1, round 2, and across both rounds, respectively. Twenty-seven delegates from Europe, the USA, and Asia attended the consensus meeting. The main checklist has seven new and nine modified items, plus six unchanged items with expanded E&E text to clarify further considerations for ADs. The abstract checklist has one new and one modified item, together with an unchanged item with expanded E&E text. The E&E document will describe the scope of the guideline, the definition of an AD, and some types of ADs and trial adaptations, and will explain each reporting item in detail, including case studies. CONCLUSIONS: We hope that making the development processes, methods, and all supporting information that aided decision-making transparent will enhance the acceptability and quick uptake of the guideline. This will also help other groups when developing similar CONSORT extensions. The guideline is applicable to all randomised trials with an AD and contains minimum reporting requirements.


Subject(s)
Randomized Controlled Trials as Topic/standards; Research Design/standards; Asia; Checklist; Consensus; Decision Support Techniques; Europe; Humans
18.
BMC Med ; 16(1): 120, 2018 07 19.
Article in English | MEDLINE | ID: mdl-30021577

ABSTRACT

BACKGROUND: As complete reporting is essential to judge the validity and applicability of multivariable prediction models, a guideline for the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) was introduced. We assessed the completeness of reporting of prediction model studies published just before the introduction of the TRIPOD statement, to refine and tailor its implementation strategy. METHODS: Within each of 37 clinical domains, the 10 journals with the highest journal impact factor were selected. A PubMed search was performed to identify prediction model studies published in these journals before the launch of TRIPOD (May 2014). Eligible publications reported on the development or external validation of a multivariable prediction model (either diagnostic or prognostic) or on the incremental value of adding a predictor to an existing model. RESULTS: We included 146 publications (84% prognostic), from which we assessed 170 models: 73 (43%) on model development, 43 (25%) on external validation, 33 (19%) on incremental value, and 21 (12%) on combined development and external validation of the same model. Overall, publications adhered to a median of 44% (25th-75th percentile 35-52%) of TRIPOD items, with 44% (35-53%) for prognostic and 41% (34-48%) for diagnostic models. TRIPOD items that were completely reported for less than 25% of the models concerned the abstract (2%), title (5%), blinding of predictor assessment (6%), comparison of development and validation data (11%), model updating (14%), model performance (14%), model specification (17%), characteristics of participants (21%), model performance measures (methods) (21%), and model-building procedures (24%). Most often reported were the TRIPOD items regarding overall interpretation (96%), source of data (95%), and risk groups (90%). CONCLUSIONS: More than half of the items considered essential for transparent reporting were not fully addressed in publications of multivariable prediction model studies. Essential information for using a model in individual risk prediction, i.e., model specifications and model performance, was incomplete for more than 80% of the models. Items that require improved reporting are the title, abstract, and model-building procedures, as they are crucial for the identification and external validation of prediction models.


Subject(s)
Research Design; Humans; Prognosis
19.
BMC Pediatr ; 17(1): 57, 2017 03 06.
Article in English | MEDLINE | ID: mdl-28260530

ABSTRACT

BACKGROUND: Systematic reviews are key tools to enable decision making by healthcare providers and policymakers. Despite the availability of the evidence-based Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2009 and PRISMA-P 2015) statements, which were developed to improve the transparency and quality of reporting of systematic reviews, uncertainty about how to deal with pediatric-specific methodological challenges of systematic reviews impairs decision-making in child health. In this paper, we identify methodological challenges specific to the design, conduct, and reporting of pediatric systematic reviews, and we propose a process to address these challenges. DISCUSSION: One fundamental decision at the outset of a systematic review is whether to focus on a pediatric population only or to include both adult and pediatric populations. From both the policy and patient care points of view, the appropriateness of interventions and comparators administered to pre-defined pediatric age subgroups is critical. Decisions need to be based on the biological plausibility of differences in treatment effects across the developmental trajectory in children. Synthesis of evidence from different trials is often impaired by the use of outcomes and measurement instruments that differ between trials and are neither relevant to nor validated in the pediatric population. Other issues specific to pediatric systematic reviews include the lack of pediatric-sensitive search strategies and inconsistent choices of pediatric age subgroups in meta-analyses. In addition to these methodological issues generic to all pediatric systematic reviews, special considerations are required for reviews of health care interventions' safety and efficacy in neonatology, global health, comparative effectiveness interventions, and individual participant data meta-analyses. To date, there is no standard approach available to overcome this problem. We propose to develop a consensus-based checklist of essential items that researchers should consider when planning (PRISMA-PC, Protocol for Children) or reporting (PRISMA-C, Reporting for Children) a pediatric systematic review. Available guidelines, including PRISMA, do not cover the complexity associated with the conduct and reporting of systematic reviews in the pediatric population; they require additional and modified standards for reporting items. Such guidance will facilitate the translation of knowledge from the literature to bedside care and policy, thereby enhancing the delivery of care and improving child health outcomes.


Subject(s)
Pediatrics; Research Design; Review Literature as Topic; Adolescent; Child; Child, Preschool; Guidelines as Topic; Humans; Infant; Infant, Newborn; Research Design/standards
20.
Clin Anat ; 30(1): 14-20, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27801507

ABSTRACT

The rise of evidence-based anatomy has emphasized the need for original anatomical studies with high clarity, transparency, and comprehensiveness in reporting. Currently, inconsistencies in the quality and reporting of such studies limit accurate assessment of their reliability and impact. Our aim was to develop a checklist of reporting items that should be addressed by authors of original anatomical studies. The study steering committee formulated a preliminary conceptual design and began to generate items on the basis of a literature review and expert opinion. This led to the development of a preliminary checklist. The validity of this checklist was assessed by a Delphi procedure, and feedback from the Delphi panelists, who were experts in anatomical research, was used to improve it. The Delphi procedure involved 12 experts and comprised two rounds, after which unanimous consensus was reached regarding the items to be included in the checklist. The steering committee agreed to name the checklist AQUA. The preliminary AQUA Checklist consisted of 26 items divided into eight sections. Following round 1, some of the items underwent major revision and three new ones were introduced. After round 2, the checklist was revised only for minor language inaccuracies. The final version of the AQUA Checklist consisted of the initial eight sections with a total of 29 items. The steering committee hopes the AQUA Checklist will improve the quality and reporting of anatomical studies. Clin. Anat. 30:14-20, 2017. © 2016 Wiley Periodicals, Inc.


Subject(s)
Anatomy/standards; Checklist; Delphi Technique