Results 1 - 3 of 3
1.
Head Neck Pathol ; 18(1): 38, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38727841

ABSTRACT

INTRODUCTION: Oral epithelial dysplasia (OED) is a precancerous histopathological finding that is considered the most important prognostic indicator for determining the risk of malignant transformation into oral squamous cell carcinoma (OSCC). The gold standard for diagnosis and grading of OED is histopathological examination, which is subject to inter- and intra-observer variability, impacting accurate diagnosis and prognosis. The aim of this review article is to examine current advances in digital pathology for artificial intelligence (AI) applications used for OED diagnosis.

MATERIALS AND METHODS: We included studies that used AI for diagnosis, grading, or prognosis of OED on histopathology images or intraoral clinical images. Studies using imaging modalities other than routine light microscopy (e.g., scanning electron microscopy), immunohistochemistry-stained histology slides, or immunofluorescence were excluded, as were studies not focusing on oral dysplasia grading and diagnosis (e.g., those discriminating OSCC from normal epithelial tissue).

RESULTS: A total of 24 studies were included in this review. Nineteen studies utilized deep learning (DL) convolutional neural networks for histopathological OED analysis, and four used machine learning (ML) models. Studies were summarized by AI method, main study outcomes, predictive value for malignant transformation, strengths, and limitations.

CONCLUSION: ML/DL studies for OED grading and prediction of malignant transformation are emerging as promising adjunctive tools in the field of digital pathology. These objective adjunctive tools can ultimately aid the pathologist in more accurate diagnosis and prognosis prediction. However, further supportive studies that focus on generalization, explainable decisions, and prognosis prediction are needed.
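The inter- and intra-observer variability mentioned above is typically quantified with an agreement statistic such as Cohen's kappa, which corrects raw agreement for the agreement expected by chance. As a minimal sketch (the grade labels below are hypothetical, not data from any of the reviewed studies):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for agreement between two raters grading the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases where both raters give the same grade.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement under independence, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[g] * freq_b[g] for g in set(rater_a) | set(rater_b)) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical OED grades from two pathologists reading the same eight slides.
a = ["mild", "mild", "moderate", "severe", "moderate", "mild", "severe", "moderate"]
b = ["mild", "moderate", "moderate", "severe", "mild", "mild", "severe", "severe"]
print(round(cohens_kappa(a, b), 3))
```

A kappa near 1 indicates near-perfect agreement, while values in the 0.4 to 0.6 range (as in this toy example) reflect the moderate agreement often reported for dysplasia grading.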


Subjects
Artificial Intelligence , Mouth Neoplasms , Precancerous Conditions , Humans , Precancerous Conditions/pathology , Precancerous Conditions/diagnosis , Mouth Neoplasms/pathology , Mouth Neoplasms/diagnosis , Mouth Mucosa/pathology
2.
Mod Pathol ; 37(1): 100369, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37890670

ABSTRACT

Generative adversarial networks (GANs) have gained significant attention in the field of image synthesis, particularly in computer vision. GANs consist of a generative model and a discriminative model trained in an adversarial setting to generate realistic and novel data. In the context of image synthesis, the generator produces synthetic images, whereas the discriminator determines their authenticity by comparing them with real examples. Through iterative training, the generator learns to create images that are indistinguishable from real ones, leading to high-quality image generation. Given their success in computer vision, GANs hold great potential for medical diagnostic applications. In the medical field, GANs can generate images of rare diseases, aid in learning, and serve as visualization tools. GANs can also leverage unlabeled medical images, which are abundant, large, and challenging to annotate manually. Having demonstrated remarkable capabilities in image synthesis, GANs have the potential to significantly impact digital histopathology. This review article focuses on the emerging use of GANs in digital histopathology, examining their applications and potential challenges. Histopathology plays a crucial role in disease diagnosis, and GANs can contribute by generating realistic microscopic images. However, ethical considerations arise because of the reliance on synthetic or pseudogenerated images; the manuscript therefore also explores current limitations and the ethical considerations associated with the use of this technology. In conclusion, digital histopathology has seen an emerging use of GANs for image enhancement, such as color (stain) normalization, virtual staining, and ink/marker removal. GANs offer significant potential to transform digital pathology when applied to specific, narrow tasks (preprocessing enhancements). Evaluating data quality, addressing biases, protecting privacy, ensuring accountability and transparency, and developing regulation are imperative to ensure the ethical application of GANs.
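The color (stain) normalization task that GAN-based methods learn end-to-end has a classical statistical baseline: map a slide's per-channel color statistics onto those of a reference slide. The sketch below is a simplified RGB version of this idea (Reinhard's original method works in LAB color space, and the random arrays stand in for real whole-slide tiles):

```python
import numpy as np

def match_stain_stats(source, target):
    """Map the source image's per-channel mean/std onto the target's.

    A classical baseline for stain normalization; GAN-based methods instead
    learn this color mapping from data. Simplified here to RGB statistics.
    """
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(3):  # normalize each color channel independently
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / (s_sd + 1e-8) * t_sd + t_mu
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy example: two random "slides" with different color casts.
rng = np.random.default_rng(0)
slide = rng.integers(50, 120, size=(32, 32, 3))       # dark scan
reference = rng.integers(130, 220, size=(32, 32, 3))  # bright reference stain
normalized = match_stain_stats(slide, reference)
```

After normalization, each channel of `normalized` shares the reference slide's mean and spread, which is exactly the consistency across scanners and staining batches that downstream models rely on.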


Subjects
Coloring Agents , Data Accuracy , Humans , Staining and Labeling , Image Processing, Computer-Assisted
3.
Article in English | MEDLINE | ID: mdl-37770329

ABSTRACT

OBJECTIVE: We leveraged an artificial intelligence deep-learning convolutional neural network (DL CNN) to detect calcified carotid artery atheromas (CCAAs) on cone beam computed tomography (CBCT) images.

STUDY DESIGN: We obtained 137 full-volume CBCT scans with previously diagnosed CCAAs. The DL model was trained on 170 single axial CBCT slices, 90 with extracranial CCAAs and 80 with intracranial CCAAs. A board-certified oral and maxillofacial radiologist confirmed the presence of each CCAA. Transfer learning through a U-Net-based CNN architecture was utilized. Data allocation was 60% training, 10% validation, and 30% testing. We determined the accuracy of the DL model in detecting CCAA by calculating the mean training and validation accuracy and the area under the receiver operating characteristic curve (AUC). We reserved 5 randomly selected unseen full CBCT volumes for final testing.

RESULTS: The mean training and validation accuracy of the model in detecting extracranial CCAAs was 92% and 82%, respectively, and the AUC was 0.84 with 1.0 sensitivity and 0.69 specificity. The mean training and validation accuracy in detecting intracranial CCAAs was 61% and 70%, respectively, and the AUC was 0.5 with 0.93 sensitivity and 0.08 specificity. Testing of full-volume scans yielded an AUC of 0.72 and 0.55 for extracranial and intracranial CCAAs, respectively.

CONCLUSION: Our DL model showed excellent discrimination in detecting extracranial CCAAs on axial CBCT images and acceptable discrimination on full volumes, but poor discrimination in detecting intracranial CCAAs, for which further research is required.
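The AUC, sensitivity, and specificity figures reported above can all be computed directly from a model's per-slice scores; AUC is equivalent to the probability that a random positive slice outscores a random negative one (the Mann-Whitney rank statistic). A minimal sketch, using hypothetical scores rather than the study's data:

```python
def auc_sens_spec(labels, scores, threshold=0.5):
    """AUC via the rank (Mann-Whitney) statistic, plus sensitivity and
    specificity at a fixed score threshold."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # AUC = P(score_pos > score_neg), counting ties as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    tp = sum(s >= threshold for s in pos)   # positives correctly flagged
    tn = sum(s < threshold for s in neg)    # negatives correctly cleared
    sensitivity = tp / len(pos)  # true-positive rate
    specificity = tn / len(neg)  # true-negative rate
    return auc, sensitivity, specificity

# Hypothetical per-slice CCAA scores: 1 = atheroma present, 0 = absent.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.7, 0.4, 0.3, 0.2]
auc, sens, spec = auc_sens_spec(labels, scores)
```

Note how a model can reach 1.0 sensitivity with modest specificity (as in the study's extracranial result): lowering the threshold catches every positive at the cost of more false alarms, while AUC summarizes ranking quality across all thresholds.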
