Results 1 - 20 of 9,576
1.
Nat Methods ; 21(2): 170-181, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37710020

ABSTRACT

Images document scientific discoveries and are prevalent in modern biomedical research. Microscopy imaging in particular is currently undergoing rapid technological advancements. However, for scientists wishing to publish obtained images and image-analysis results, there are currently no unified guidelines for best practices. Consequently, microscopy images and image data in publications may be unclear or difficult to interpret. Here, we present community-developed checklists for preparing light microscopy images and describing image analyses for publications. These checklists offer authors, readers and publishers key recommendations for image formatting and annotation, color selection, data availability and reporting image-analysis workflows. The goal of our guidelines is to increase the clarity and reproducibility of image figures and thereby to heighten the quality and explanatory power of microscopy data.


Subject(s)
Checklist , Publishing , Reproducibility of Results , Image Processing, Computer-Assisted , Microscopy
2.
Brief Bioinform ; 24(5)2023 09 20.
Article in English | MEDLINE | ID: mdl-37529934

ABSTRACT

Adequate reporting is essential for evaluating the performance and clinical utility of a prognostic prediction model. Previous studies indicated a prevalence of incomplete or suboptimal reporting in translational and clinical studies involving development of multivariable prediction models for prognosis, which limited the potential applications of these models. While reporting templates introduced by the established guidelines provide an invaluable framework for reporting prognostic studies uniformly, there is a widespread lack of qualified adherence, which may be due to miscellaneous challenges in manual reporting of extensive model details, especially in the era of precision medicine. Here, we present ReProMSig (Reproducible Prognosis Molecular Signature), a web-based integrative platform providing the analysis framework for development, validation and application of a multivariable prediction model for cancer prognosis, using clinicopathological features and/or molecular profiles. The ReProMSig platform supports transparent reporting by presenting both methodology details and analysis results in a strictly structured reporting file, following the guideline checklist with minimal manual input needed. The generated reporting file can be published together with a developed prediction model, to allow thorough interrogation and external validation, as well as online application for prospective cases. We demonstrated the utility of ReProMSig by developing prognostic molecular signatures for stage II and III colorectal cancer, respectively, in comparison with a published signature reproduced by ReProMSig. Together, ReProMSig provides an integrated framework for development, evaluation and application of prognostic/predictive biomarkers for cancer in a more transparent and reproducible way, which would be a useful resource for health care professionals and biomedical researchers.


Subject(s)
Checklist , Neoplasms , Humans , Precision Medicine , Neoplasms/diagnosis , Neoplasms/genetics , Neoplasms/therapy
3.
Ann Intern Med ; 177(6): 782-790, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38739919

ABSTRACT

BACKGROUND: Conflicts of interest (COIs) of contributors to a guideline project and the funding of that project can influence the development of the guideline. Comprehensive reporting of information on COIs and funding is essential for the transparency and credibility of guidelines. OBJECTIVE: To develop an extension of the Reporting Items for practice Guidelines in HealThcare (RIGHT) statement for the reporting of COIs and funding in policy documents of guideline organizations and in guidelines: the RIGHT-COI&F checklist. DESIGN: The recommendations of the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network were followed. The process consisted of registration of the project and setting up working groups, generation of the initial list of items, achieving consensus on the items, and formulating and testing the final checklist. SETTING: International collaboration. PARTICIPANTS: 44 experts. MEASUREMENTS: Consensus on checklist items. RESULTS: The checklist contains 27 items: 18 about the COIs of contributors and 9 about the funding of the guideline project. Of the 27 items, 16 are labeled as policy related because they address the reporting of COI and funding policies that apply across an organization's guideline projects. These items should be described ideally in the organization's policy documents, otherwise in the specific guideline. The remaining 11 items are labeled as implementation related and they address the reporting of COIs and funding of the specific guideline. LIMITATION: The RIGHT-COI&F checklist requires testing in real-life use. CONCLUSION: The RIGHT-COI&F checklist can be used to guide the reporting of COIs and funding in guideline development and to assess the completeness of reporting in published guidelines and policy documents. PRIMARY FUNDING SOURCE: The Fundamental Research Funds for the Central Universities of China.


Subject(s)
Checklist , Conflict of Interest , Practice Guidelines as Topic , Humans , Research Support as Topic/ethics , Disclosure
4.
Clin Infect Dis ; 78(2): 324-329, 2024 02 17.
Article in English | MEDLINE | ID: mdl-37739456

ABSTRACT

More than a decade after the Consolidated Standards of Reporting Trials group released a reporting items checklist for non-inferiority randomized controlled trials, the infectious diseases literature continues to underreport these items. Trialists, journals, and peer reviewers should redouble their efforts to ensure infectious diseases studies meet these minimum reporting standards.


Subject(s)
Checklist , Research Design , Humans , Reference Standards
5.
PLoS Med ; 21(1): e1004326, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38261576

ABSTRACT

BACKGROUND: In biomedical research, it is often desirable to seek consensus among individuals who have differing perspectives and experience. This is important when evidence is emerging, inconsistent, limited, or absent. Even when research evidence is abundant, clinical recommendations, policy decisions, and priority-setting may still require agreement from multiple, sometimes ideologically opposed parties. Despite their prominence and influence on key decisions, consensus methods are often poorly reported. Our aim was to develop the first reporting guideline dedicated to and applicable to all consensus methods used in biomedical research regardless of the objective of the consensus process, called ACCORD (ACcurate COnsensus Reporting Document). METHODS AND FINDINGS: We followed methodology recommended by the EQUATOR Network for the development of reporting guidelines: a systematic review was followed by a Delphi process and meetings to finalize the ACCORD checklist. The preliminary checklist was drawn from the systematic review of existing literature on the quality of reporting of consensus methods and suggestions from the Steering Committee. A Delphi panel (n = 72) was recruited with representation from 6 continents and a broad range of experience, including clinical, research, policy, and patient perspectives. The 3 rounds of the Delphi process were completed by 58, 54, and 51 panelists. The preliminary checklist of 56 items was refined to a final checklist of 35 items relating to the article title (n = 1), introduction (n = 3), methods (n = 21), results (n = 5), discussion (n = 2), and other information (n = 3). CONCLUSIONS: The ACCORD checklist is the first reporting guideline applicable to all consensus-based studies. It will support authors in writing accurate, detailed manuscripts, thereby improving the completeness and transparency of reporting and providing readers with clarity regarding the methods used to reach agreement. 
Furthermore, the checklist will make the rigor of the consensus methods used to guide the recommendations clear for readers. Reporting consensus studies with greater clarity and transparency may enhance trust in the recommendations made by consensus panels.


Subject(s)
Biomedical Research , Consensus , Humans , Checklist , Policy , Trust
6.
PLoS Med ; 21(5): e1004390, 2024 May.
Article in English | MEDLINE | ID: mdl-38709851

ABSTRACT

BACKGROUND: When research evidence is limited, inconsistent, or absent, healthcare decisions and policies need to be based on consensus amongst interested stakeholders. In these processes, the knowledge, experience, and expertise of health professionals, researchers, policymakers, and the public are systematically collected and synthesised to reach agreed clinical recommendations and/or priorities. However, despite the influence of consensus exercises, the methods used to achieve agreement are often poorly reported. The ACCORD (ACcurate COnsensus Reporting Document) guideline was developed to help report any consensus methods used in biomedical research, regardless of the health field, techniques used, or application. This explanatory document facilitates the use of the ACCORD checklist. METHODS AND FINDINGS: This paper was built collaboratively based on classic and contemporary literature on consensus methods and publications reporting their use. For each ACCORD checklist item, this explanation and elaboration document unpacks the pieces of information that should be reported and provides a rationale on why it is essential to describe them in detail. Furthermore, this document offers a glossary of terms used in consensus exercises to clarify the meaning of common terms used across consensus methods, to promote uniformity, and to support understanding for consumers who read consensus statements, position statements, or clinical practice guidelines. The items are followed by examples of reporting items from the ACCORD guideline, in text, tables and figures. CONCLUSIONS: The ACCORD materials - including the reporting guideline and this explanation and elaboration document - can be used by anyone reporting a consensus exercise used in the context of health research. 
As a reporting guideline, ACCORD helps researchers to be transparent about the materials, resources (both human and financial), and procedures used in their investigations so readers can judge the trustworthiness and applicability of their results/recommendations.


Subject(s)
Checklist , Consensus , Humans , Biomedical Research/standards , Research Design/standards , Guidelines as Topic , Research Report/standards
7.
Ann Surg ; 280(2): 248-252, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38323468

ABSTRACT

OBJECTIVES: To assess the current quality of surgical outcome reporting in the medical literature and to provide recommendations for improvement. BACKGROUND: In 1996, The Lancet labeled surgery as a "comic opera" mostly referring to the poor quality of outcome reporting in the literature impeding improvement in surgical quality and patient care. METHODS: We screened 3 first-tier and 2 second-tier surgical journals, as well as 3 leading medical journals for original articles reporting on results of surgical procedures published over a recent 18-month period. The quality of outcome reporting was assessed using a prespecified 12-item checklist. RESULTS: Six hundred twenty-seven articles reporting surgical outcomes were analyzed, including 125 randomized controlled trials. Only 1 (0.2%) article met all 12 criteria of the checklist, whereas 356 articles (57%) fulfilled less than half of the criteria. The poorest reporting was on cumulative morbidity burden, which was missing in 94% of articles (n=591), as well as patient-reported outcomes, missing in 83% of publications (n=518). Comparing journal groups for the individual criterion, we found moderate to very strong statistical evidence for better quality of reporting in high versus lower impact journals for 7 of 12 criteria and strong statistical evidence for better reporting of patient-reported outcomes in medical versus surgical journals (P < 0.001). CONCLUSIONS: The quality of outcomes reporting in the medical literature remains poor, lacking improvement over the past 20 years on most key end points. The implementation of standardized outcome reporting is urgently needed to minimize biased interpretation of data, thereby enabling improved patient care and the elaboration of meaningful guidelines.


Subject(s)
Surgical Procedures, Operative , Humans , Surgical Procedures, Operative/standards , Periodicals as Topic , Outcome Assessment, Health Care , Checklist
8.
Int J Obes (Lond) ; 48(7): 901-912, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38459257

ABSTRACT

Nutrition-focused interventions are essential to optimize the bariatric care process and improve health and weight outcomes over time. Clear and detailed reporting of these interventions in research reports is crucial for understanding and applying the findings effectively in clinical practice and research replication. Given the importance of reporting transparency in research, this study aimed to use the Template for Intervention Description and Replication (TIDieR) checklist to evaluate the completeness of intervention reporting in nutritional weight management interventions adjunct to metabolic and bariatric surgery (MBS). The secondary aim was to examine the factors associated with better reporting. A literature search in PubMed, PsychINFO, EMBASE, Scopus, and the Cochrane Controlled Register of Trials was conducted to include randomized controlled trials (RCT), quasi-RCTs and parallel group trials. A total of 22 trials were included in the final analysis. Among the TIDieR 12 items, 6.6 ± 1.9 items were fully reported by all studies. None of the studies completely reported all intervention descriptors. The main areas where reporting required improvement were providing adequate details of the materials and procedures of the interventions, intervention personalization, and intervention modifications during the study. The quality of intervention reporting remained the same after vs. before the release of the TIDieR guidelines. Receiving funds from industrial organizations (p = 0.02) and having the study recorded within a registry platform (p = 0.08) were associated with better intervention reporting. Nutritional weight management interventions in MBS care are still below the desirable standards for reporting. The present study highlights the need to improve adequate reporting of such interventions, which would allow for greater replicability, evaluation through evidence synthesis studies, and transferability into clinical practice.


Subject(s)
Bariatric Surgery , Checklist , Humans , Bariatric Surgery/standards , Bariatric Surgery/methods , Checklist/standards , Obesity/surgery , Weight Reduction Programs/methods , Weight Reduction Programs/standards
9.
Mod Pathol ; 37(4): 100439, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38286221

ABSTRACT

This work puts forth and demonstrates the utility of a reporting framework for collecting and evaluating annotations of medical images used for training and testing artificial intelligence (AI) models in assisting detection and diagnosis. AI has unique reporting requirements, as shown by the AI extensions to the Consolidated Standards of Reporting Trials (CONSORT) and Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklists and the proposed AI extensions to the Standards for Reporting Diagnostic Accuracy (STARD) and Transparent Reporting of a Multivariable Prediction model for Individual Prognosis or Diagnosis (TRIPOD) checklists. AI for detection and/or diagnostic image analysis requires complete, reproducible, and transparent reporting of the annotations and metadata used in training and testing data sets. In an earlier work by other researchers, an annotation workflow and quality checklist for computational pathology annotations were proposed. In this manuscript, we operationalize this workflow into an evaluable quality checklist that applies to any reader-interpreted medical images, and we demonstrate its use for an annotation effort in digital pathology. We refer to this quality framework as the Collection and Evaluation of Annotations for Reproducible Reporting of Artificial Intelligence (CLEARR-AI).


Subject(s)
Artificial Intelligence , Checklist , Humans , Prognosis , Image Processing, Computer-Assisted , Research Design
10.
J Pediatr ; 264: 113769, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37821023

ABSTRACT

OBJECTIVE: To examine the associations between several potential predictors (child biologic, social, and family factors) and a positive screen for developmental delay using the Infant Toddler Checklist (ITC) at the 18-month health supervision visit in primary care. METHODS: This was a cross-sectional study of healthy children attending an 18-month health supervision visit in primary care. Parents completed a standardized questionnaire, addressing child, social, and family characteristics, and the ITC. Logistic regression analyses were used to assess the associations between predictors and a positive ITC. RESULTS: Among 2188 participants (45.5% female; mean age, 18.2 months), 285 (13%) had a positive ITC and 1903 (87%) had a negative ITC. The aOR for a positive ITC for male compared with female sex was 2.15 (95% CI, 1.63-2.83; P < .001). The aOR for birthweight was 0.65 per 1 kg increase (95% CI, 0.53-0.80; P < .001). The aOR for a family income of <$40,000 compared with ≥$150,000 was 3.50 (95% CI, 2.22-5.53; P < .001), and the aOR for family income between $40,000-$79,999 compared with ≥$150,000 was 1.88 (95% CI, 1.26-2.80; P = .002). CONCLUSIONS: Screening positive on the ITC may identify children at risk for the double jeopardy of developmental delay and social disadvantage and allow clinicians to intervene through monitoring, referral, and resource navigation for both child development and social needs. TRIAL REGISTRATION: Clinicaltrials.gov (NCT01869530).


Subject(s)
Checklist , Income , Infant , Humans , Male , Female , Child, Preschool , Cross-Sectional Studies , Child Development , Parents
11.
J Gen Intern Med ; 39(2): 272-276, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37670067

ABSTRACT

BACKGROUND: Diversity, equity, and inclusion (DEI) are at the core of publication ethics, and language around DEI has been shown to affect patient outcomes. Inclusive language is an important piece of effective communication and is one way to demonstrate and foster a welcoming, respectful, and accessible environment. Non-inclusive terminology in research may represent implicit bias, which is not typically corrected through introspection; thus, a systematic approach is needed in scientific writing. The prevalence of inclusive language guidance in leading medical journals is currently unknown. OBJECTIVE: Investigators assess the prevalence and quality of inclusive language guidelines in author instructions in highly cited English language medical journals. DESIGN: A cross-sectional review of author instructions from a convenience sample of 100 highly cited medical journals was completed in January 2023. SUBJECTS: Each journal's author instructions were reviewed for presence of inclusive language guidelines for manuscript submissions. MAIN MEASURES: Guidelines that included specific examples of inclusive language were defined as "strong." Author instructions were also reviewed for the Sex and Gender Equity in Research (SAGER) checklist, and each journal's publisher and impact factor (IF) were recorded. KEY RESULTS: The 100 journals reviewed had an IF range of 3.0-202.7 with a median IF = 19.5 (IQR 11.95, 38.68), and 28 unique publishers were represented. Inclusive language guidance was provided in 23% of medical journals reviewed. Of those, 20 (86.9%) provided strong guidance. Seven journals also recommended use of the SAGER checklist. CONCLUSION: Significant gaps still exist in ensuring use of inclusive language in medical journals.


Subject(s)
Periodicals as Topic , Publishing , Humans , Cross-Sectional Studies , Checklist , Language
12.
BMC Cancer ; 24(1): 743, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38890612

ABSTRACT

BACKGROUND: Breast cancer is a prevalent cancer characterized by its aggressive nature and potential to cause mortality among women. The rising mortality rates and women's inadequate perception of the disease's severity in developing countries highlight the importance of screening using conventional methods and reliable scales. Since the validity and reliability of the breast cancer perception scale (BCPS) have not been established in the Iranian context, this study aimed to determine the measurement properties of the BCPS in women residing in Tabriz, Iran. METHODS: The present study comprised a cross-sectional design, encompassing a sample of 372 Iranian women. The participants were selected through a multi-stage cluster random sampling technique conducted over a period spanning from November 2022 to February 2023. The measurement properties of the Iranian version of the BCPS were assessed following the guidelines outlined in the COSMIN checklist. This involved conducting various steps, including the translation process, reliability testing (internal consistency, test-retest reliability, and measurement error), and methodological tests for validity (content validity, face validity, construct validity, and hypothesis testing). The study also investigated the factors of responsiveness and interpretability. The presence of floor and ceiling effects was assessed. RESULTS: The internal consistency of the scale was assessed using Cronbach's alpha, yielding a satisfactory value of 0.68. Additionally, McDonald's omega (95% CI) was computed, resulting in a value of 0.70 (0.66 to 0.74). Furthermore, the test-retest reliability was evaluated, revealing a high intraclass correlation coefficient (ICC) of 0.97 (95% CI: 0.94 to 0.99). The CVI, CVR, and impact scores of the BCPS were determined to be 0.98, 0.95, and 3.70, respectively, indicating favorable levels of content and face validity.
To assess construct validity, an examination of the Exploratory Factor Analysis (EFA) was conducted on a set of 24 items. This analysis revealed the presence of six distinct factors, which collectively accounted for 52% of the cumulative variance. The fit indices of the validity model (CFI = 0.91, NFI = 0.96, RFI = 0.94, TLI = 0.90, χ2/df = 2.03, RMSEA = 0.055 and SRMR = 0.055) were confirmed during the confirmatory factor analysis (CFA). The overall score of BCPS exhibited a ceiling effect of 0.3%. The floor effect observed in the overall score (BCPS) was found to be 0.5%. Concerning the validation of the hypothesis, Spearman's correlation coefficient of 0.55 was obtained between the BCPS and the QLICP-BR V2.0. This correlation value signifies a statistically significant association. Furthermore, it is worth noting that the minimum important change (MIC) of 3.92 exhibited a higher value compared to the smallest detectable change (SDC) of 3.70, thus suggesting a satisfactory level of response. CONCLUSIONS: The obtained findings suggest that the Iranian version of the BCPS demonstrates satisfactory psychometric properties for assessing the perception of breast cancer among Iranian women. Furthermore, it exhibits favorable responsiveness to clinical variations. Consequently, it can serve as a screening instrument for healthcare professionals to comprehend breast cancer and as a reliable tool in research endeavors.


Subject(s)
Breast Neoplasms , Checklist , Psychometrics , Humans , Female , Breast Neoplasms/psychology , Breast Neoplasms/diagnosis , Iran , Cross-Sectional Studies , Adult , Middle Aged , Reproducibility of Results , Psychometrics/methods , Surveys and Questionnaires/standards , Perception , Aged , Young Adult
13.
PLoS Biol ; 19(5): e3001177, 2021 05.
Article in English | MEDLINE | ID: mdl-33951050

ABSTRACT

In an effort to better utilize published evidence obtained from animal experiments, systematic reviews of preclinical studies are increasingly common, along with the methods and tools to appraise them (e.g., SYstematic Review Center for Laboratory animal Experimentation [SYRCLE's] risk of bias tool). We performed a cross-sectional study of a sample of recent preclinical systematic reviews (2015-2018) and examined a range of epidemiological characteristics and used a 46-item checklist to assess reporting details. We identified 442 reviews published across 43 countries in 23 different disease domains that used 26 animal species. Reporting of key details to ensure transparency and reproducibility was inconsistent across reviews and within article sections. Items were most completely reported in the title, introduction, and results sections of the reviews, while least reported in the methods and discussion sections. Less than half of reviews reported that a risk of bias assessment for internal and external validity was undertaken, and none reported methods for evaluating construct validity. Our results demonstrate that a considerable number of preclinical systematic reviews investigating diverse topics have been conducted; however, their quality of reporting is inconsistent. Our study provides the justification and evidence to inform the development of guidelines for conducting and reporting preclinical systematic reviews.


Subject(s)
Peer Review, Research/methods , Peer Review, Research/standards , Research Design/standards , Animal Experimentation/standards , Animals , Bias , Checklist/standards , Drug Evaluation, Preclinical/methods , Drug Evaluation, Preclinical/standards , Empirical Research , Epidemiologic Methods , Epidemiology/trends , Humans , Peer Review, Research/trends , Publications , Reproducibility of Results , Research Design/trends
14.
Am J Obstet Gynecol ; 230(1): B2-B11, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37678646

ABSTRACT

Placenta accreta spectrum is a life-threatening complication of pregnancy that is underdiagnosed and can result in massive hemorrhage, disseminated intravascular coagulation, massive transfusion, surgical injury, multisystem organ failure, and even death. Given the rarity and complexity, most obstetrical hospitals and providers do not have comprehensive expertise in the diagnosis and management of placenta accreta spectrum. Emergency management, antenatal interdisciplinary planning, and system preparedness are key pillars of care for this life-threatening disorder. We present an updated sample checklist for emergent and unplanned cases, an antenatal planning worksheet for known or suspected cases, and a bundle of activities to improve system and team preparedness for placenta accreta spectrum.


Subject(s)
Placenta Accreta , Postpartum Hemorrhage , Pregnancy , Female , Humans , Cesarean Section/adverse effects , Placenta Accreta/therapy , Placenta Accreta/surgery , Postpartum Hemorrhage/diagnosis , Postpartum Hemorrhage/therapy , Postpartum Hemorrhage/etiology , Perinatology , Checklist , Hysterectomy/adverse effects , Retrospective Studies
15.
Pharmacol Res ; 199: 107015, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38036197

ABSTRACT

Existing reporting checklists lack the necessary level of detail and comprehensiveness to be used in guidelines on Chinese patent medicines (CPM). This study aims to develop reporting guidance for CPM guidelines based on the Reporting Items of Practice Guidelines in Healthcare (RIGHT) statement. We extracted information from CPM guidelines, existing reporting standards for traditional Chinese medicine (TCM), and the RIGHT statement and its extensions to form the initial pool of reporting items for CPM guidelines. Seventeen experts from diverse disciplines participated in two rounds of a Delphi process to refine and clarify the items. Finally, 18 authoritative consultants in the field of TCM and reporting guidelines reviewed and approved the RIGHT for CPM checklist. We added 16 new items and modified two items of the original RIGHT statement to form the RIGHT for CPM checklist, which contains 51 items grouped into seven sections and 23 topics. The new and revised items are distributed across four sections (Basic information, Background, Evidence, and Recommendations) and seven topics: Title/subtitle (one new and one revised item), Registration information (one new item), Brief description of the health problem (four new items), Guideline development groups (one revised item), Health care questions (two new items), Recommendations (two new items), and Rationale/explanation for recommendations (six new items). The RIGHT for CPM checklist is committed to providing users with guidance for detailed, comprehensive and transparent reporting, and helping practitioners better understand and implement CPM guidelines.


Subject(s)
Checklist , Medicine, Chinese Traditional
16.
Psychooncology ; 33(3): e6318, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38429990

ABSTRACT

OBJECTIVE: Clinical supervision of oncology clinicians by psycho-oncologists is an important means of psychosocial competence transfer and support. Research on this essential liaison activity remains scarce. The aim of this study was to assess the impact of supervision on oncology clinicians' feelings towards patients presented in supervision. METHODS: Oncology clinicians' (n = 23) feelings towards patients presented in supervision were assessed with the Feeling Word Checklist (FWC). The FWC was filled in by supervisees prior and after their supervision sessions (n = 91), which were conducted by experienced supervisors (n = 6). Pre- post-modification of feelings was evaluated based on a selection of FWC items, which were beforehand considered as likely to change in a beneficial supervision. Items were evaluated on session level using t-tests for dependent groups. Composite scores were calculated for feelings expected to raise and feelings expected to decrease and analysed on the level of supervisees. RESULTS: Feelings related to threats, loss of orientation or hostility such as "anxious", "overwhelmed", "impotent", "confused", "angry", "depreciated" and "guilty" decreased significantly after supervision, while feelings related to the resume of the relationship ("attentive", "happy"), a better understanding of the patient ("empathic"), a regain of control ("confident") and being "useful" significantly increased. Feeling "interested" and "calm" remained unchanged. Significant increase or decrease in the composite scores for supervisees confirmed these results. CONCLUSIONS: This study demonstrates modification of feelings towards patients presented in supervision. This modification corresponds to the normative, formative, and especially restorative function (support of the clinician) of supervision.


Subject(s)
Checklist , Emotions , Male , Humans , Anxiety , Anger , Guilt
17.
Eur Radiol ; 34(4): 2805-2815, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37740080

ABSTRACT

OBJECTIVE: To evaluate the usage of a well-known and widely adopted checklist, the Checklist for Artificial Intelligence in Medical Imaging (CLAIM), for self-reporting through a systematic analysis of its citations. METHODS: Google Scholar, Web of Science, and Scopus were used to search for citations (date, 29 April 2023). CLAIM's use for self-reporting with proof (i.e., a filled-out checklist) and other potential use cases were systematically assessed in research papers. Eligible papers were evaluated independently by two readers, with the help of automatic annotation. Item-by-item confirmation analysis on papers with checklist proof was subsequently performed. RESULTS: A total of 391 unique citations were identified from three databases. Of the 118 papers included in this study, 12 (10%) provided proof of a self-reported CLAIM checklist. More than half (70; 59%) only mentioned some sort of adherence to CLAIM without providing any proof in the form of a checklist. Approximately one-third (36; 31%) cited CLAIM for reasons unrelated to their reporting or methodological adherence. Overall, the claims on 57 to 93% of the items per publication were confirmed in the item-by-item analysis, with a mean and standard deviation of 81% and 10%, respectively. CONCLUSION: Only a small proportion of the publications used CLAIM as a checklist and supplied filled-out documentation; however, the self-reported checklists may contain errors and should be approached cautiously. We hope that this systematic citation analysis will motivate the artificial intelligence community to recognize the importance of proper self-reporting, and encourage researchers, journals, editors, and reviewers to take action to ensure the proper usage of checklists. CLINICAL RELEVANCE STATEMENT: Only a small percentage of the publications used CLAIM for self-reporting with proof (i.e., a filled-out checklist).
However, the filled-out checklist proofs may contain errors, e.g., false claims of adherence, and should be approached cautiously. These may indicate inappropriate usage of checklists and necessitate further action by authorities. KEY POINTS: • Of 118 eligible papers, only 12 (10%) followed the CLAIM checklist for self-reporting with proof (i.e., filled-out checklist). More than half (70; 59%) only mentioned some kind of adherence without providing any proof. • Overall, claims on 57 to 93% of the items were valid in item-by-item confirmation analysis, with a mean and standard deviation of 81% and 10%, respectively. • Even with the checklist proof, the items declared may contain errors and should be approached cautiously.


Subject(s)
Artificial Intelligence, Checklist, Humans, Diagnostic Imaging, Radiography
18.
Eur Radiol ; 34(8): 5028-5040, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38180530

ABSTRACT

OBJECTIVE: To evaluate the use of reporting checklists and quality scoring tools for self-reporting purposes in the radiomics literature. METHODS: A literature search was conducted in PubMed (search date, April 23, 2023). The radiomics literature was sampled at random after a sample size calculation with an a priori power analysis. A systematic assessment for self-reporting, including the use of documentation such as completed checklists or quality scoring tools, was conducted in original research papers. These eligible papers underwent independent evaluation by a panel of nine readers, with three readers assigned to each paper; automatic annotation was used to assist in this process. A detailed item-by-item confirmation analysis was then carried out on papers with checklist documentation, with independent evaluation by two readers. RESULTS: The sample size calculation yielded 117 papers. Most of the included papers were retrospective (94%; 110/117), single-center (68%; 80/117), based on private data (89%; 104/117), and lacked external validation (79%; 93/117). Only seven papers (6%) had at least one self-reported document (Radiomics Quality Score (RQS), Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD), or Checklist for Artificial Intelligence in Medical Imaging (CLAIM)), a statistically significant result by binomial test (p < 0.001). The median rate of confirmed items across all three documents was 81% (interquartile range, 6). For quality scoring tools, documented scores were higher than the suggested scores, with a mean difference of -7.2 (standard deviation, 6.8). CONCLUSION: Radiomics publications often lack self-reported checklists or quality scoring tools. Even when such documents are provided, it is essential to be cautious, as the accuracy of the reported items or scores may be questionable. CLINICAL RELEVANCE STATEMENT: The current state of the radiomics literature reveals a notable absence of self-reporting with documentation, along with inaccurate reporting practices. This critical observation may serve as a catalyst for motivating the radiomics community to adopt and utilize such tools appropriately, thereby fostering the rigor, transparency, and reproducibility of their research and moving the field forward. KEY POINTS: • In the radiomics literature, there has been a notable absence of self-reporting with documentation. • Even when such documents are provided, it is critical to exercise caution because the accuracy of the reported items or scores may be questionable. • The radiomics community needs to be motivated to adopt and appropriately utilize reporting checklists and quality scoring tools.


Subject(s)
Checklist, Self Report, Humans, Radiology/standards, Radiology/methods, Diagnostic Imaging/methods, Diagnostic Imaging/standards, Radiomics
19.
Int J Behav Nutr Phys Act ; 21(1): 30, 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38481238

ABSTRACT

Increasing physical activity in patients offers dual benefits, fostering improved patient health and recovery while also bolstering healthcare system efficiency by minimizing costs related to extended hospital stays, complications, and readmissions. Wearable activity trackers offer valuable opportunities to enhance physical activity across various healthcare settings and among different patient groups. However, their integration into healthcare faces multiple implementation challenges related to the devices themselves, patients, clinicians, and systemic factors. This article presents the Wearable Activity Tracker Checklist for Healthcare (WATCH), which was recently developed through an international Delphi study. The WATCH provides a comprehensive framework for the implementation and evaluation of wearable activity trackers in healthcare. It covers the purpose and setting for usage; patient, provider, and support personnel roles; selection of relevant metrics; device specifications; procedural steps for issuance and maintenance; data management; timelines; necessary adaptations for specific scenarios; and essential resources (such as education and training) for effective implementation. The WATCH is designed to support the implementation of wearable activity trackers across a wide range of healthcare populations and settings, and among users with varied levels of experience. The overarching goal is to support broader, sustained, and systematic use of wearable activity trackers in healthcare, thereby fostering enhanced physical activity promotion and improved patient outcomes.


Subject(s)
Checklist, Fitness Trackers, Humans, Exercise, Motivation, Delivery of Health Care
20.
J Surg Res ; 300: 133-140, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38810526

ABSTRACT

INTRODUCTION: The use of survey methodology in surgical research has proliferated in recent years, but the quality of these surveys and of their reporting is understudied. METHODS: We conducted a comprehensive review of the surgical survey literature (January 2022-July 2023) via PubMed in July 2023. Articles that (1) reported data gleaned from a survey, (2) were published in an English-language journal, (3) targeted survey respondents in the United States or Canada, and (4) pertained to general surgery specialties were included. We assessed the quality of survey reports using the Checklist for Reporting Of Survey Studies (CROSS) guidelines. Articles were evaluated for concordance with CROSS using a dichotomous (yes or no) scale. RESULTS: The initial literature search yielded 481 articles; 57 articles were included in the analysis based on the inclusion criteria. The mean response rate was 37% (range, 0.62%-98%). The majority of surveys were administered electronically (n = 50; 87.8%). No publications adhered to all 40 CROSS items; on average, publications met 61.2% of the items applicable to that study. Articles were most likely to adhere to reporting criteria for the title and abstract (mean adherence, 99.1%), introduction (99.1%), and discussion (92.4%). Articles were least adherent to items related to methodology (42.6%) and moderately adherent to items related to results (76.6%). Only five articles cited the CROSS guidelines or another standardized survey reporting tool (10.5%). CONCLUSIONS: Our analysis demonstrates that the CROSS reporting guidelines for survey research have not been widely adopted. Surveys reported in the surgical literature may be of variable quality. Increased adherence to guidelines could improve the development and dissemination of surveys conducted by surgeons.


Subject(s)
Checklist, Humans, Surveys and Questionnaires/statistics & numerical data, Checklist/standards, Canada, General Surgery/standards, United States, Biomedical Research/standards, Biomedical Research/statistics & numerical data