Results 1 - 3 of 3
1.
Clin Oral Investig; 28(7): 381, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38886242

ABSTRACT

OBJECTIVES: Tooth extraction is one of the most frequently performed medical procedures. The indication is based on a combination of clinical and radiological examination and individual patient parameters, and should be made with great care. However, determining whether a tooth should be extracted is not always a straightforward decision. Moreover, visual and cognitive pitfalls in the analysis of radiographs may lead to incorrect decisions. Artificial intelligence (AI) could serve as a decision support tool by providing a score of tooth extractability. MATERIAL AND METHODS: Using 26,956 single-tooth images from 1,184 panoramic radiographs (PANs), we trained a ResNet50 network to classify teeth as either extraction-worthy or preservable. For this purpose, teeth were cropped from the PANs with different margins and annotated. The usefulness of the AI-based classification, as well as that of dentists, was evaluated on a test dataset. In addition, the explainability of the best AI model was visualized via class activation mapping using CAMERAS. RESULTS: The ROC-AUC of the best AI model for discriminating teeth worthy of preservation was 0.901, using a 2% crop margin on the dental images. In contrast, the average ROC-AUC for dentists was only 0.797. At a tooth-extraction prevalence of 19.1%, the AI model's PR-AUC was 0.749, while the dentists' evaluation reached only 0.589. CONCLUSION: AI models outperform dentists/specialists in predicting tooth extraction based solely on X-ray images, and AI performance improves with increasing contextual information. CLINICAL RELEVANCE: AI could help monitor at-risk teeth and reduce errors in the indication for extraction.
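The abstract contrasts ROC-AUC with PR-AUC at a 19.1% extraction prevalence. As an illustration of how these two metrics are derived from classifier scores, here is a minimal pure-Python sketch; the labels and scores are invented toy data, not results from the study:

```python
def roc_auc(labels, scores):
    """ROC-AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive is scored above a randomly chosen negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def pr_auc(labels, scores):
    """PR-AUC as average precision: mean of the precision values at the
    rank of each true positive, walking down the ranking by score."""
    order = sorted(range(len(labels)), key=lambda i: -scores[i])
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            tp += 1
            ap += tp / rank
    return ap / sum(labels)

# Toy example: two of six teeth labelled extraction-worthy (invented data)
labels = [1, 0, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
print(roc_auc(labels, scores))  # 0.875
print(pr_auc(labels, scores))   # 0.8333...
```

Unlike ROC-AUC, average precision depends on the class prevalence, which is why the two numbers in the abstract differ so much despite describing the same model.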


Subject(s)
Artificial Intelligence; Radiography, Panoramic; Tooth Extraction; Humans; Dentists; Female; Male; Adult
2.
Head Neck; 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38454656

ABSTRACT

BACKGROUND: Early detection of oral cancer (OC) or its precursors is the most effective measure to improve outcomes. The reasons such lesions are missed on conventional oral examination (COE), and possible countermeasures, are still unclear. METHODS: In this randomized controlled trial, we investigated the effects of a standardized oral examination (SOE) compared with COE. Forty-nine dentists, specialists, and dental students wearing an eye tracker had to detect 10 simulated oral lesions drawn into a volunteer's oral cavity. RESULTS: SOE achieved a higher detection rate, with 85.4% sensitivity compared to 78.8% in the control group (p = 0.017), owing to more complete coverage of the oral cavity (p < 0.001). Detection rate correlated with examination duration (p = 0.002). CONCLUSIONS: A standardized approach can improve the systematics, and thereby the detection rate, of oral examinations; it should take at least 5 min. Perceptual and cognitive errors and improper technique cause oral lesions to be missed. Wide implementation of SOE could be an additional strategy to enhance early detection of OC.
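The detection rate reported above is a per-examiner sensitivity: the fraction of the ten planted lesions each examiner found, averaged over a group. A minimal sketch of that arithmetic; all findings below are invented for illustration and are not the trial's data:

```python
# Each examiner's findings for the 10 planted lesions: True = detected.
# Invented data; the trial's per-examiner results are not reproduced here.
soe_group = [[True] * 9 + [False],       # 9/10 lesions found
             [True] * 8 + [False] * 2]   # 8/10 lesions found
coe_group = [[True] * 8 + [False] * 2,
             [True] * 7 + [False] * 3]

def sensitivity(findings):
    """Fraction of planted lesions this examiner detected."""
    return sum(findings) / len(findings)

def group_mean(group):
    """Mean sensitivity across the examiners of one study arm."""
    return sum(sensitivity(f) for f in group) / len(group)

print(group_mean(soe_group))  # 0.85
print(group_mean(coe_group))  # 0.75
```

The trial then compares the two group means with a significance test, which is omitted here.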

3.
IEEE Trans Med Imaging; 27(12): 1769-81, 2008 Dec.
Article in English | MEDLINE | ID: mdl-19033093

ABSTRACT

This paper describes the use of color image analysis to automatically discriminate between oesophagus, stomach, small intestine, and colon tissue in wireless capsule endoscopy (WCE). WCE uses "pill-cam" technology to recover color video imagery from the entire gastrointestinal tract. Accurately reviewing and reporting these data is a vital part of the examination, but it is tedious and time-consuming. Automatic image analysis tools therefore play an important role in supporting the clinician and speeding up the process. Our approach first divides each WCE image into subimages and rejects all subimages in which tissue is not clearly visible. We then create a feature vector combining color, texture, and motion information from the entire image and the valid subimages. Color features are derived from hue-saturation histograms, compressed using a hybrid transform that incorporates the discrete cosine transform and principal component analysis. A second feature combining color and texture information is derived using local binary patterns. The video is segmented into meaningful parts using support vector or multivariate Gaussian classifiers built within the framework of a hidden Markov model. We present experimental results that demonstrate the effectiveness of this method.
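One of the texture features named above, the local binary pattern (LBP), is simple to sketch: each pixel is encoded by which of its eight neighbours are at least as bright as it, and the image is then summarised by the histogram of these 8-bit codes. A minimal NumPy version, assuming the basic 3x3 neighbourhood with a conventional bit ordering, which may differ from the paper's exact LBP variant:

```python
import numpy as np

def lbp_histogram(gray):
    """Normalised histogram of 3x3 local binary pattern codes
    for a 2-D grayscale image (border pixels are skipped)."""
    g = np.asarray(gray, dtype=np.int32)
    h, w = g.shape
    center = g[1:-1, 1:-1]
    codes = np.zeros_like(center)
    # Eight neighbours, clockwise from top-left; each contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

# On a flat image every neighbour ties the centre, so every code is 255.
print(lbp_histogram(np.ones((5, 5)))[255])  # 1.0
```

In a pipeline like the one described, such a histogram would be computed per subimage and concatenated with the colour features before classification.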


Subject(s)
Capsule Endoscopy/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Lower Gastrointestinal Tract/anatomy & histology; Upper Gastrointestinal Tract/anatomy & histology; Algorithms; Capsule Endoscopes; Color; Colorimetry; Data Compression/methods; Humans; Markov Chains; Normal Distribution; Pattern Recognition, Automated/methods