ABSTRACT
OBJECTIVE: Federated Learning (FL) enables collaborative training of artificial intelligence (AI) models from multiple data sources without directly sharing data. Given the large amount of sensitive data in dentistry, FL may be particularly relevant for oral and dental research and applications. This study employed FL, for the first time, for a dental task: automated tooth segmentation on panoramic radiographs.
METHODS: We employed a dataset of 4,177 panoramic radiographs collected from nine centers (n = 143 to n = 1,881 per center) across the globe and used FL to train a machine learning model for tooth segmentation. FL performance was compared against Local Learning (LL), i.e., training models on isolated data from each center (assuming data sharing is not an option). Further, the performance gap to Central Learning (CL), i.e., training on centrally pooled data (based on data sharing agreements), was quantified. Generalizability of the models was evaluated on a pooled test dataset from all centers.
RESULTS: For 8 out of 9 centers, FL outperformed LL with statistical significance (p < 0.05); only for the center providing the largest amount of data did FL not show such an advantage. For generalizability, FL outperformed LL across all centers. CL surpassed both FL and LL in performance and generalizability.
CONCLUSION: If data pooling (for CL) is not feasible, FL is shown to be a useful alternative for training performant and, more importantly, generalizable deep learning models in dentistry, where data protection barriers are high.
CLINICAL SIGNIFICANCE: This study demonstrates the validity and utility of FL in dentistry, encouraging researchers to adopt this method to improve the generalizability of dental AI models and ease their transition to the clinical environment.
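The three regimes compared above differ only in where training happens and how models are combined. As a minimal sketch, FL can be reduced to averaging per-center model parameters weighted by each center's dataset size (FedAvg-style aggregation); the aggregation rule, parameter values, and center sizes shown here are illustrative assumptions, not details reported by the study.

```python
# Sketch of one Federated Learning aggregation round: each center trains
# locally, then only parameters (not data) are sent for weighted averaging.
# FedAvg-style weighting by dataset size is an assumption for illustration.

def fedavg(center_weights, center_sizes):
    """Dataset-size-weighted average of per-center model parameters."""
    total = sum(center_sizes)
    n_params = len(center_weights[0])
    return [
        sum(w[i] * n for w, n in zip(center_weights, center_sizes)) / total
        for i in range(n_params)
    ]

# Hypothetical parameters from two centers; sizes echo the study's
# smallest (n = 143) and largest (n = 1,881) centers.
weights = [[0.2, 0.4, 0.6], [0.8, 0.6, 0.4]]
sizes = [143, 1881]

global_weights = fedavg(weights, sizes)  # aggregated model for the next round
```

Local Learning corresponds to skipping the aggregation step entirely (each center keeps its own weights), while Central Learning corresponds to training a single model on the pooled data, which is only possible under data sharing agreements.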
Subjects
Artificial Intelligence, Deep Learning, Humans, Panoramic Radiography, Researchers
ABSTRACT
Despite technological advances in the analysis of digital images for medical consultations, many health information systems lack the ability to correlate textual descriptions of image findings with the actual images. Images and reports often reside in separate silos in the medical record throughout the process of image viewing, report authoring, and report consumption. Forward-thinking centers and early adopters have created interactive reports with multimedia elements and embedded hyperlinks that connect the narrative text with the related source images and measurements. Most of these solutions rely on proprietary single-vendor systems for viewing and reporting, in the absence of any encompassing industry standards to facilitate interoperability with the electronic health record (EHR) and other systems. International standards have enabled the digitization of image acquisition, storage, viewing, and structured reporting, and they provide the foundation for discussing enhanced reporting. Lessons learned in the digital transformation of radiology and pathology can serve as a basis for interactive multimedia reporting (IMR) across image-centric medical specialties. This paper describes the standards-based infrastructure and communications needed to fulfill recently defined clinical requirements, developed through a consensus of an international workgroup of multidisciplinary medical specialists, informaticists, and industry participants. These efforts have led toward the development of an Integrating the Healthcare Enterprise (IHE) profile that will serve as a foundation for interoperable interactive multimedia reporting.