ABSTRACT
Purpose: The aim of our study was to evaluate artificial intelligence (AI) support in pelvic fracture diagnosis on X-rays, focusing on performance, workflow integration, and radiologists' feedback in a spoke emergency hospital. Materials and methods: Between August and November 2021, a total of 235 sites of fracture or suspected fracture were evaluated and enrolled in this prospective study. The radiologist's sensitivity, specificity, accuracy, and positive and negative predictive values were compared with those of the AI. Cohen's kappa was used to measure the agreement between the AI and the radiologist. We also reviewed the AI workflow integration process, focusing on potential issues, and assessed radiologists' opinions on AI via a survey. Results: The radiologist outperformed the AI in accuracy, sensitivity, and specificity, but the McNemar test demonstrated no statistically significant difference between the AI's and the radiologist's performance (p = 0.32). Cohen's kappa was 0.64. Conclusion: Contrary to expectations, our preliminary results did not demonstrate a real improvement in patient outcome or in reporting time, but they did show the AI's high NPV (94.62%) and its non-inferiority to the radiologist's performance. Moreover, the commercially available AI algorithm used in our study automatically learns from data, so we expect a progressive performance improvement. AI could be considered a promising tool to rule out fractures (especially when used as a "second reader") and to prioritize positive cases, particularly in increasing-workload scenarios (ED, night shifts), but further research is needed to evaluate its real impact on clinical practice.
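For readers unfamiliar with the agreement statistics cited above, here is a minimal pure-Python sketch (illustrative only, not the study's actual analysis code) of Cohen's kappa for two binary raters and the exact McNemar test on the discordant pairs:

```python
from math import comb

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two binary raters (lists of 0/1 labels)."""
    n = len(rater_a)
    # observed agreement: fraction of cases where raters match
    p_obs = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # expected agreement under independence of the two raters
    pa1 = sum(rater_a) / n
    pb1 = sum(rater_b) / n
    p_exp = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (p_obs - p_exp) / (1 - p_exp)

def mcnemar_exact_p(b, c):
    """Exact two-sided McNemar p-value.

    b, c: counts of the two kinds of discordant pairs
    (e.g. AI-positive/radiologist-negative and vice versa).
    Under H0 the discordant counts follow Binomial(b + c, 0.5).
    """
    n = b + c
    k = min(b, c)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, p)
```

For example, two raters agreeing on 8 of 10 cases with balanced marginals give `cohens_kappa(...) = 0.6`, in the same "substantial agreement" band as the 0.64 reported above.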
ABSTRACT
OBJECTIVES: To develop a structured reporting (SR) template for whole-body CT examinations of polytrauma patients, based on the consensus of a panel of emergency radiology experts from the Italian Society of Medical and Interventional Radiology. METHODS: A multi-round Delphi method was used to quantify inter-panelist agreement for all SR sections. Internal consistency of each section and quality in terms of average inter-item correlation were evaluated by means of Cronbach's alpha (Cα). RESULTS: The final SR form included 118 items (6 in the "Patient Clinical Data" section, 4 in the "Clinical Evaluation" section, 9 in the "Imaging Protocol" section, and 99 in the "Report" section). The experts' overall mean score and sum of scores were 4.77 (range 1-5) and 257.56 (range 206-270) in the first Delphi round, and 4.96 (range 4-5) and 208.44 (range 200-210) in the second round, respectively. In the second Delphi round, the experts' overall mean score was higher than in the first round, and the standard deviation was lower (3.11 in the second round vs 19.71 in the first round), reflecting higher expert agreement in the second round. Moreover, Cα was higher in the second round than in the first round (0.97 vs 0.87). CONCLUSIONS: Our SR template for whole-body CT examinations of polytrauma patients is based on strong agreement among panel experts in emergency radiology and could improve communication between radiologists and the trauma team.
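As an aside on the internal-consistency measure used in this Delphi study, a minimal sketch of Cronbach's alpha (again illustrative, not the authors' code) from per-item score lists:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    `items` is a list of lists: one inner list per item (question),
    each holding that item's scores from the same set of raters.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # each rater's total score across all items
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(pvar(it) for it in items) / pvar(totals))
```

Identical item responses yield alpha = 1.0; values near the 0.87-0.97 reported above indicate high (and in the second round, near-ceiling) inter-item consistency.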