1.
J Dent; 150: 105318, 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39182639

ABSTRACT

OBJECTIVES: To improve reporting and comparability and to reduce bias in dental computer vision studies, we aimed to develop a Core Outcome Measures Set (COMS) for this field. The COMS was derived through a consensus process as part of the WHO/ITU/WIPO Global Initiative AI for Health (WHO/ITU/WIPO AI4H).

METHODS: We first assessed existing guidance documents for diagnostic accuracy studies and conducted interviews with experts in the field. The resulting list of outcome measures was mapped against computer vision modeling tasks, clinical fields, and reporting levels. The resulting systematization focused on providing relevant outcome measures while retaining detail for meta-research and technical replication, yielding recommendations on (1) levels of reporting for different clinical fields and tasks and (2) outcome measures. The COMS was agreed upon using a two-stage e-Delphi process with 26 participants from various IADR groups, the WHO/ITU/WIPO AI4H, ADEA, and AAOMFR.

RESULTS: We assigned agreed levels of reporting to different computer vision tasks. We agreed that human expert assessment and diagnostic accuracy considerations are the only feasible approach to achieving clinically meaningful evaluation. Studies should report at least eight core outcome measures: confusion matrix, accuracy, sensitivity, specificity, precision, F1 score, area under the receiver operating characteristic curve, and area under the precision-recall curve.

CONCLUSION: Dental researchers should aim to report computer vision studies following the outlined COMS. Reviewers and editors may consider the defined COMS when assessing studies, and authors should justify any deviation from it.

CLINICAL SIGNIFICANCE: Comparing and synthesizing dental computer vision studies is hampered by the variety of reported outcome measures. Adherence to the defined COMS is expected to increase comparability across studies, enable synthesis, and reduce selective reporting.
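
All eight core outcome measures named in the RESULTS section above can be derived from a model's predicted labels and scores. The following is a minimal, illustrative sketch (not part of the COMS publication) using scikit-learn for a binary classification task; the labels, probabilities, and 0.5 decision threshold are synthetic placeholders.

    import numpy as np
    from sklearn.metrics import (
        confusion_matrix, accuracy_score, recall_score, precision_score,
        f1_score, roc_auc_score, average_precision_score,
    )

    # Hypothetical ground-truth labels and predicted probabilities (illustrative only)
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
    y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3, 0.55, 0.05])
    y_pred = (y_prob >= 0.5).astype(int)  # assumed decision threshold of 0.5

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    report = {
        "confusion_matrix": (tn, fp, fn, tp),
        "accuracy": accuracy_score(y_true, y_pred),
        "sensitivity": recall_score(y_true, y_pred),       # recall of the positive class
        "specificity": tn / (tn + fp),                      # recall of the negative class
        "precision": precision_score(y_true, y_pred),
        "f1_score": f1_score(y_true, y_pred),
        "auroc": roc_auc_score(y_true, y_prob),             # area under the ROC curve
        "auprc": average_precision_score(y_true, y_prob),   # area under the precision-recall curve
    }
    for name, value in report.items():
        print(name, value)

Reporting the threshold-dependent measures (sensitivity, specificity, precision, F1) alongside the threshold-free curves (AUROC, AUPRC) is what keeps such results comparable across studies that choose different operating points.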

2.
J Dent; 135: 104556, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37209769

ABSTRACT

OBJECTIVE: Federated Learning (FL) enables collaborative training of artificial intelligence (AI) models from multiple data sources without directly sharing data. Given the large amount of sensitive data in dentistry, FL may be particularly relevant for oral and dental research and applications. This study employed FL, for the first time, for a dental task: automated tooth segmentation on panoramic radiographs.

METHODS: We used a dataset of 4,177 panoramic radiographs collected from nine centers (n = 143 to n = 1,881 per center) across the globe and applied FL to train a machine learning model for tooth segmentation. FL performance was compared against Local Learning (LL), i.e., training models on isolated data from each center (assuming data sharing is not an option). Further, the performance gap to Central Learning (CL), i.e., training on centrally pooled data (based on data sharing agreements), was quantified. Generalizability of the models was evaluated on a pooled test dataset from all centers.

RESULTS: For 8 out of 9 centers, FL outperformed LL with statistical significance (p < 0.05); only for the center providing the largest amount of data did FL not show such an advantage. For generalizability, FL outperformed LL across all centers. CL surpassed both FL and LL in both performance and generalizability.

CONCLUSION: If data pooling (for CL) is not feasible, FL is a useful alternative for training performant and, more importantly, generalizable deep learning models in dentistry, where data protection barriers are high.

CLINICAL SIGNIFICANCE: This study demonstrates the validity and utility of FL in dentistry, encouraging researchers to adopt this method to improve the generalizability of dental AI models and ease their transition to the clinical environment.


Subject(s)
Artificial Intelligence; Deep Learning; Humans; Radiography, Panoramic; Researchers
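
For readers unfamiliar with the federated setup contrasted with LL and CL above, the following is a minimal, illustrative FedAvg-style sketch in PyTorch. It is not the study's actual pipeline; the segmentation model, loss, data loaders, local epochs, and learning rate are all placeholders.

    import copy
    import torch

    def local_update(global_model, loader, epochs=1, lr=1e-3):
        """Train a copy of the global model on one center's private data."""
        model = copy.deepcopy(global_model)
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = torch.nn.BCEWithLogitsLoss()  # placeholder loss for binary tooth masks
        model.train()
        for _ in range(epochs):
            for images, masks in loader:
                opt.zero_grad()
                loss = loss_fn(model(images), masks)
                loss.backward()
                opt.step()
        return model.state_dict(), len(loader.dataset)

    def federated_round(global_model, center_loaders):
        """One communication round: local training at each center, then weighted averaging."""
        states, sizes = zip(*(local_update(global_model, dl) for dl in center_loaders))
        total = sum(sizes)
        avg_state = {
            key: sum(state[key].float() * (n / total) for state, n in zip(states, sizes))
            for key in states[0]
        }
        global_model.load_state_dict(avg_state)  # only weights leave each center, never images
        return global_model

The key point mirrored from the abstract: each center's radiographs stay local, and only model parameters are exchanged and averaged, which is what allows FL to approach CL performance without data sharing agreements.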
3.
Diagnostics (Basel); 12(8), 2022 Aug 14.
Article in English | MEDLINE | ID: mdl-36010318

ABSTRACT

The detection and classification of cystic lesions of the jaw are of high clinical relevance and represent a topic of interest in medical artificial intelligence research. The human clinical diagnostic reasoning process uses contextual information, including the spatial relation of the detected lesion to other anatomical structures, to establish a preliminary classification. Here, we aimed to emulate clinical diagnostic reasoning step by step using a combined object detection and image segmentation approach on panoramic radiographs (OPGs). We used a multicenter training dataset of 855 OPGs (all positives) and an evaluation set of 384 OPGs (240 negatives). We further compared our models against an international human control group of ten dental professionals from seven countries. The object detection model achieved an average precision of 0.42 (intersection over union (IoU): 0.50, maximal detections: 100) and an average recall of 0.394 (IoU: 0.50-0.95, maximal detections: 100). The classification model achieved a sensitivity of 0.84 for odontogenic cysts and 0.56 for non-odontogenic cysts, as well as a specificity of 0.59 for odontogenic cysts and 0.84 for non-odontogenic cysts (IoU: 0.30). The human control group achieved a sensitivity of 0.70 for odontogenic cysts, 0.44 for non-odontogenic cysts, and 0.56 for OPGs without cysts, as well as a specificity of 0.62 for odontogenic cysts, 0.95 for non-odontogenic cysts, and 0.76 for OPGs without cysts. Taken together, our results show that a combined object detection and image segmentation approach is feasible for emulating the human clinical diagnostic reasoning process in classifying cystic lesions of the jaw.
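
The IoU thresholds quoted above (0.50 for detection metrics, 0.30 for the classification comparison) decide whether a predicted lesion box counts as matching a ground-truth annotation. The following is a minimal, illustrative IoU computation in Python; the box coordinates are made up, and this is not the study's evaluation code.

    def iou(box_a, box_b):
        """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
        x1 = max(box_a[0], box_b[0])
        y1 = max(box_a[1], box_b[1])
        x2 = min(box_a[2], box_b[2])
        y2 = min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        union = area_a + area_b - inter
        return inter / union if union > 0 else 0.0

    # A predicted cyst box counts as a true positive if it overlaps a
    # ground-truth lesion at or above the chosen IoU threshold.
    pred, truth = (120, 340, 260, 460), (130, 350, 270, 455)  # hypothetical pixel coordinates
    print(iou(pred, truth) >= 0.50)  # detection threshold used for average precision

Lowering the threshold to 0.30 for the classification comparison, as the study reports, tolerates looser localization while still requiring the model to place the lesion roughly where the radiologist annotated it.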
