Results 1 - 4 of 4
1.
J Oral Pathol Med ; 53(9): 551-566, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39256895

ABSTRACT

BACKGROUND: Artificial intelligence (AI)-based tools have shown promise in histopathology image analysis, improving the accuracy of oral squamous cell carcinoma (OSCC) detection with the aim of reducing human error.

OBJECTIVES: This systematic review and meta-analysis evaluated deep learning (DL) models for OSCC detection on histopathology images, assessing the diagnostic performance metrics commonly reported in AI-based medical image analysis studies.

METHODS: Diagnostic accuracy studies that used DL models to analyze histopathological images of OSCC against a reference standard were included. Six databases (PubMed, Google Scholar, Scopus, Embase, ArXiv, and IEEE) were screened for publications without any time limitation. The QUADAS-2 tool was used to assess study quality. The meta-analyses included only studies that reported true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN) for their test sets.

RESULTS: Of 1267 screened studies, 17 met the final inclusion criteria. The DL methods used were image classification (n = 11) and segmentation (n = 3), with some studies using combined methods (n = 3). On QUADAS-2 assessment, only three studies had a low risk of bias across all applicability domains. Segmentation studies reported accuracy of 0.97, sensitivity of 0.97, specificity of 0.98, and a Dice coefficient of 0.92. Classification studies reported accuracy of 0.99, sensitivity of 0.99, specificity of 1.0, Dice of 0.95, F1 score of 0.98, and AUC of 0.99. Meta-analysis yielded pooled estimates of 0.98 sensitivity and 0.93 specificity.

CONCLUSION: The application of AI-based classification and segmentation methods to image analysis represents a fundamental shift in digital pathology. DL approaches demonstrated high accuracy for OSCC detection on histopathology, comparable to that of human experts in some studies. Although AI-based models cannot replace a well-trained pathologist, they can assist by improving the objectivity and repeatability of diagnosis while reducing the variability and human error that result from pathologist burnout.
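Every per-study figure above derives from the reported TP/TN/FP/FN counts. Below is a minimal Python sketch of those standard formulas (note that for binary detection the Dice coefficient and the F1 score coincide); the study counts are hypothetical placeholders, and the naive count-pooling at the end is only illustrative, since formal meta-analyses like this one fit bivariate random-effects models instead:

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Standard diagnostic-accuracy metrics from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)          # true positive rate (recall)
    specificity = tn / (tn + fp)          # true negative rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)    # for binary labels, Dice == F1
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "dice_f1": dice}

# Hypothetical per-study counts (illustrative numbers, not from the review):
studies = [
    {"tp": 95, "tn": 90, "fp": 5, "fn": 2},
    {"tp": 180, "tn": 160, "fp": 12, "fn": 4},
]

# Naive pooled estimates by summing counts across studies; a real
# meta-analysis would fit a bivariate random-effects model instead.
totals = {k: sum(s[k] for s in studies) for k in ("tp", "tn", "fp", "fn")}
print(diagnostic_metrics(**totals))
```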


Subject(s)
Carcinoma, Squamous Cell; Deep Learning; Mouth Neoplasms; Humans; Mouth Neoplasms/pathology; Mouth Neoplasms/diagnostic imaging; Carcinoma, Squamous Cell/pathology; Carcinoma, Squamous Cell/diagnostic imaging; Image Processing, Computer-Assisted/methods; Artificial Intelligence
2.
Head Neck Pathol ; 18(1): 38, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38727841

ABSTRACT

INTRODUCTION: Oral epithelial dysplasia (OED) is a precancerous histopathological finding that is considered the most important prognostic indicator of the risk of malignant transformation into oral squamous cell carcinoma (OSCC). The gold standard for diagnosis and grading of OED is histopathological examination, which is subject to inter- and intra-observer variability, affecting the accuracy of diagnosis and prognosis. The aim of this review article is to examine current advances in digital pathology for artificial intelligence (AI) applications in OED diagnosis.

MATERIALS AND METHODS: We included studies that used AI for the diagnosis, grading, or prognosis of OED on histopathology images or intraoral clinical images. Studies using imaging modalities other than routine light microscopy (e.g., scanning electron microscopy), immunohistochemistry-stained histology slides, or immunofluorescence were excluded, as were studies not focused on oral dysplasia grading and diagnosis (e.g., those aiming only to discriminate OSCC from normal epithelial tissue).

RESULTS: A total of 24 studies were included in this review. Nineteen used deep learning (DL) convolutional neural networks for histopathological OED analysis, and four used machine learning (ML) models. Studies were summarized by AI method, main study outcomes, predictive value for malignant transformation, strengths, and limitations.

CONCLUSION: ML/DL studies for OED grading and prediction of malignant transformation are emerging as promising adjunctive tools in the field of digital pathology. Such objective adjuncts can ultimately aid the pathologist in more accurate diagnosis and prognosis prediction. However, further supportive studies focusing on generalization, explainable decisions, and prognosis prediction are needed.
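As a concrete illustration of the CNN-based approach taken by most of the included studies, the sketch below fine-tunes an ImageNet-pretrained ResNet-18 to grade histopathology patches. The three-class scheme (mild/moderate/severe), patch size, and hyperparameters are illustrative assumptions, not details drawn from any reviewed study:

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed setup: 3 OED grades (mild / moderate / severe); illustrative only.
NUM_CLASSES = 3

# Start from an ImageNet-pretrained backbone (transfer learning), then
# replace the final fully connected layer with a 3-class grading head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One gradient step on a batch of (N, 3, 224, 224) patches."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors in place of real histopathology patches.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (4,))
print(train_step(images, labels))
```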


Subject(s)
Artificial Intelligence; Mouth Neoplasms; Precancerous Conditions; Humans; Precancerous Conditions/pathology; Precancerous Conditions/diagnosis; Mouth Neoplasms/pathology; Mouth Neoplasms/diagnosis; Mouth Mucosa/pathology
3.
Mod Pathol ; 37(1): 100369, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37890670

ABSTRACT

Generative adversarial networks (GANs) have gained significant attention in the field of image synthesis, particularly in computer vision. A GAN pairs a generative model with a discriminative model, trained in an adversarial setting to generate realistic and novel data. In image synthesis, the generator produces synthetic images while the discriminator judges their authenticity against real examples; through iterative training, the generator learns to produce images that are indistinguishable from real ones, yielding high-quality image generation. Given their success in computer vision, GANs hold great potential for medical diagnostic applications. In the medical field, GANs can generate images of rare diseases, aid in learning, and serve as visualization tools. They can also leverage unlabeled medical images, which are abundant and large but challenging to annotate manually. These capabilities position GANs to significantly impact digital histopathology. This review article focuses on the emerging use of GANs in digital histopathology, examining their applications and potential challenges. Histopathology plays a crucial role in disease diagnosis, and GANs can contribute by generating realistic microscopic images; however, reliance on synthetic or pseudogenerated images raises ethical concerns, so the manuscript also explores the current limitations and ethical considerations associated with this technology. In conclusion, digital histopathology has seen emerging use of GANs for image enhancement, such as color (stain) normalization, virtual staining, and ink/marker removal. GANs offer significant potential to transform digital pathology when applied to specific, narrow tasks (preprocessing enhancements). Evaluating data quality, addressing biases, protecting privacy, ensuring accountability and transparency, and developing regulation are imperative to ensure the ethical application of GANs.
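The adversarial setup described above is compact enough to sketch end to end. The following minimal PyTorch example trains a generator and discriminator against each other on a toy 2-D Gaussian in place of histopathology images; the architectures and hyperparameters are illustrative assumptions:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
LATENT_DIM = 8

# Generator maps random noise to fake 2-D samples;
# discriminator scores samples as real (1) or fake (0).
G = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])  # "real" data
    noise = torch.randn(64, LATENT_DIM)
    fake = G(noise)

    # Discriminator step: distinguish real from (detached) fake samples.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the discriminator into scoring fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(fake.mean(dim=0))  # should drift toward the real mean [2.0, -1.0]
```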


Subject(s)
Coloring Agents; Data Accuracy; Humans; Staining and Labeling; Image Processing, Computer-Assisted
4.
Article in English | MEDLINE | ID: mdl-37770329

ABSTRACT

OBJECTIVE: We leveraged an artificial intelligence deep-learning convolutional neural network (DL CNN) to detect calcified carotid artery atheromas (CCAAs) on cone beam computed tomography (CBCT) images.

STUDY DESIGN: We obtained 137 full-volume CBCT scans with previously diagnosed CCAAs. The DL model was trained on 170 single axial CBCT slices, 90 with extracranial CCAAs and 80 with intracranial CCAAs. A board-certified oral and maxillofacial radiologist confirmed the presence of each CCAA. Transfer learning with a U-Net-based CNN architecture was used. Data were allocated 60% to training, 10% to validation, and 30% to testing. We assessed the model's accuracy in detecting CCAAs by calculating the mean training and validation accuracy and the area under the receiver operating characteristic curve (AUC). Five randomly selected unseen full CBCT volumes were reserved for final testing.

RESULTS: For extracranial CCAAs, the mean training and validation accuracy was 92% and 82%, respectively, and the AUC was 0.84 with 1.0 sensitivity and 0.69 specificity. For intracranial CCAAs, the mean training and validation accuracy was 61% and 70%, respectively, and the AUC was 0.5 with 0.93 sensitivity and 0.08 specificity. Testing on full-volume scans yielded AUCs of 0.72 and 0.55 for extracranial and intracranial CCAAs, respectively.

CONCLUSION: Our DL model showed excellent discrimination in detecting extracranial CCAAs on axial CBCT images and acceptable discrimination on full-volume scans, but poor discrimination for intracranial CCAAs, for which further research is required.
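The discrimination figures reported above come from standard ROC analysis. Below is a minimal scikit-learn sketch of how AUC, sensitivity, and specificity fall out of per-slice model scores; the labels, scores, and 0.5 operating threshold are synthetic placeholders, not the study's data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-slice model outputs: 1 = CCAA present.
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, size=200), 0, 1)

auc = roc_auc_score(y_true, y_score)       # threshold-free discrimination
y_pred = (y_score >= 0.5).astype(int)      # assumed operating threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

print(f"AUC={auc:.2f}  sensitivity={tp/(tp+fn):.2f}  "
      f"specificity={tn/(tn+fp):.2f}")
```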
