Results 1 - 6 of 6
1.
J Pathol Inform; 14: 100159, 2023.
Article in English | MEDLINE | ID: mdl-36506813

ABSTRACT

Background: Skin cancers are the most common malignancies diagnosed worldwide. While the early detection and treatment of pre-cancerous and cancerous skin lesions can dramatically improve outcomes, factors such as a global shortage of pathologists, increased workloads, and high rates of diagnostic discordance underscore the need for techniques that improve pathology workflows. Although AI models are now being used to classify lesions from whole slide images (WSIs), diagnostic performance rarely surpasses that of expert pathologists. Objectives: The objective of the present study was to create an AI model to detect and classify skin lesions with a higher degree of sensitivity than previously demonstrated, with the potential to match and eventually surpass expert pathologists and improve clinical workflows. Methods: We combined supervised learning (SL) with semi-supervised learning (SSL) to produce an end-to-end multi-level skin lesion detection system that not only detects 5 main types of skin lesions with high sensitivity and specificity, but also subtypes and localizes lesions and provides margin status to evaluate the proximity of the lesion to non-epidermal margins. The Supervised Training Subset consisted of 2188 random WSIs collected by the PathologyWatch (PW) laboratory between 2013 and 2018, while the Weakly Supervised Subset consisted of 5161 WSIs from daily case specimens. The Validation Set consisted of 250 curated daily case WSIs obtained from the PW tissue archives and included 50 "mimickers". The Testing Set (3821 WSIs) was composed of non-curated daily case specimens collected from July 20, 2021 to August 20, 2021 from PW laboratories. Results: The performance characteristics of our AI model (i.e., Mihm) were assessed retrospectively by running the Testing Set through the Mihm Evaluation Pipeline. Our results show that the sensitivity of Mihm in classifying melanocytic lesions, basal cell carcinoma, atypical squamous lesions, verruca vulgaris, and seborrheic keratosis was 98.91% (95% CI: 98.27%, 99.55%), 97.24% (95% CI: 96.15%, 98.33%), 95.26% (95% CI: 93.79%, 96.73%), 93.50% (95% CI: 89.14%, 97.86%), and 86.91% (95% CI: 82.13%, 91.69%), respectively. Additionally, our multi-level (i.e., patch-level, ROI-level, and WSI-level) detection algorithm includes a qualitative feature that subtypes lesions, an AI overlay in the front-end digital display that localizes diagnostic ROIs, and a report on margin status obtained by detecting overlap between lesions and non-epidermal tissue margins. Conclusions: Our AI model, developed in collaboration with dermatopathologists, detects 5 skin lesion types with higher sensitivity than previously published AI models, and provides end users with information such as subtyping, localization, and margin status in a front-end digital display. Our end-to-end system has the potential to improve pathology workflows by increasing diagnostic accuracy, expediting the course of patient care, and ultimately improving patient outcomes.
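
A note on the reported figures: the per-class sensitivities and 95% confidence intervals above are the kind of quantity computed from raw slide counts. A minimal sketch, assuming a normal-approximation (Wald) interval and purely hypothetical counts; the paper does not state which interval method it used:

```python
# Minimal sketch: per-class sensitivity with a normal-approximation 95% CI.
# Counts are hypothetical, not the study's data.
import math

def sensitivity_ci(true_pos: int, false_neg: int, z: float = 1.96):
    """Return (sensitivity, lower bound, upper bound)."""
    n = true_pos + false_neg              # all truly positive WSIs for the class
    sens = true_pos / n
    half_width = z * math.sqrt(sens * (1.0 - sens) / n)
    return sens, max(0.0, sens - half_width), min(1.0, sens + half_width)

# Hypothetical example: 910 of 920 melanocytic-lesion WSIs flagged by the model.
sens, lo, hi = sensitivity_ci(true_pos=910, false_neg=10)
print(f"sensitivity {sens:.2%} (95% CI: {lo:.2%}, {hi:.2%})")
```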

2.
Gastrointest Endosc; 95(3): 512-518.e1, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34896100

ABSTRACT

BACKGROUND AND AIMS: A reliable assessment of bowel preparation is important to ensure high-quality colonoscopy. Current bowel preparation scoring systems are limited by interobserver variability. This study aimed to demonstrate objective assessment of bowel preparation adequacy using an artificial intelligence (AI)/convolutional neural network (CNN) algorithm developed from colonoscopy videos. METHODS: Two CNNs were developed using a training set of 73,304 images from 200 colonoscopies. First, a binary CNN was trained to distinguish video frames that were appropriate versus inappropriate for scoring with the Boston Bowel Preparation Scale (BBPS). A second multiclass CNN was trained on 26,950 appropriate frames that were expertly annotated with BBPS segment scores (0-3). We validated the algorithm using 252 10-second video clips that were assigned BBPS segment scores by 2 experts. The algorithm produced a mean BBPS score (AI-BBPS) by averaging the BBPS score predicted for each frame. We maximized the algorithm's performance by choosing a dichotomized AI-BBPS cutoff that most closely matched dichotomized BBPS scores (ie, adequate vs inadequate). We tested the AI-BBPS against human rating using 30 independent 10-second video clips (test set 1) and 10 full withdrawal colonoscopy videos (test set 2). RESULTS: In the validation set, the algorithm demonstrated an area under the curve of .918 and accuracy of 85.3% for detection of inadequate bowel cleanliness. In test set 1, sensitivity for inadequate bowel preparation was 100% and agreement between raters and AI was 76.7% to 83.3%. In test set 2, sensitivity for inadequate bowel preparation for each segment was 100% and agreement between raters and AI was 68.9% to 89.7%. Agreement between raters alone and between raters and AI was similar (κ = .694 and .649, respectively). CONCLUSIONS: The algorithm's assessment of bowel cleanliness as measured with the BBPS showed good performance and agreement with experts, including on full withdrawal colonoscopies.
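
A note on the scoring pipeline: the abstract describes a two-stage design in which a binary CNN filters frames appropriate for BBPS scoring, a multiclass CNN assigns each kept frame a BBPS class (0-3), the frame scores are averaged into AI-BBPS, and the mean is dichotomized into adequate versus inadequate. A minimal sketch under those assumptions; the two predictor callables and the 2.0 adequacy cutoff are placeholders for illustration, not the authors' published implementation:

```python
# Rough sketch of the frame-level scoring described in the abstract.
# `frame_is_appropriate` and `predict_bbps_class` stand in for the two trained
# CNNs (binary and multiclass); both are hypothetical placeholders here.
from statistics import mean
from typing import Callable, Sequence

def score_segment(frames: Sequence,
                  frame_is_appropriate: Callable,
                  predict_bbps_class: Callable,
                  adequate_cutoff: float = 2.0) -> tuple[float, bool]:
    """Mean AI-BBPS over appropriate frames, dichotomized at an assumed cutoff."""
    appropriate = [f for f in frames if frame_is_appropriate(f)]
    if not appropriate:
        raise ValueError("no frames suitable for BBPS scoring")
    ai_bbps = mean(predict_bbps_class(f) for f in appropriate)   # classes 0-3
    return ai_bbps, ai_bbps >= adequate_cutoff
```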


Subjects
Artificial Intelligence ; Colonoscopy ; Cathartics ; Colonoscopy/methods ; Humans ; Neural Networks, Computer ; Observer Variation
3.
Gastroenterology; 161(3): 1074, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33901494
4.
Gastroenterology; 160(3): 710-719.e2, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33098883

ABSTRACT

BACKGROUND AND AIMS: Endoscopic disease activity scoring in ulcerative colitis (UC) is useful in clinical practice but done infrequently. It is required in clinical trials, where it is expensive and slow because human central readers are needed. A machine learning algorithm automating the process could elevate clinical care and facilitate clinical research. Prior work using single-institution databases and endoscopic still images has been promising. METHODS: Seven hundred and ninety-five full-length endoscopy videos were prospectively collected from a phase 2 trial of mirikizumab with 249 patients from 14 countries, totaling 19.5 million image frames. Expert central readers assigned each full-length endoscopy video 1 endoscopic Mayo score (eMS) and 1 Ulcerative Colitis Endoscopic Index of Severity (UCEIS) score. Initially, video data were cleaned and abnormality features were extracted using convolutional neural networks. Subsequently, a recurrent neural network was trained on the features to predict eMS and UCEIS from individual full-length endoscopy videos. RESULTS: The primary metric used to assess the performance of the recurrent neural network model was the quadratic weighted kappa (QWK), which measures agreement between the machine-read endoscopy score and the human central reader score. QWK progressively penalizes disagreements that exceed 1 level. The model's agreement metric was excellent, with a QWK of 0.844 (95% confidence interval, 0.787-0.901) for eMS and 0.855 (95% confidence interval, 0.80-0.91) for UCEIS. CONCLUSIONS: We found that a deep learning algorithm can be trained to predict levels of UC severity from full-length endoscopy videos. Our data set was prospectively collected in a multinational clinical trial, videos rather than still images were used, both UCEIS and eMS were reported, and the machine learning algorithm's performance metrics met or exceeded those previously published for UC severity scores.
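
A note on the primary metric: quadratic weighted kappa scores agreement between the model and the central reader while penalizing each disagreement by the square of its distance in score levels. A minimal sketch using scikit-learn's cohen_kappa_score; the library choice and the example labels are assumptions, not the study's code or data:

```python
# Minimal sketch: quadratic weighted kappa (QWK) between central-reader and
# model-predicted endoscopic Mayo scores. Labels below are hypothetical.
from sklearn.metrics import cohen_kappa_score

central_reader_ems = [0, 1, 2, 3, 2, 1, 3, 0]   # hypothetical eMS labels
model_ems          = [0, 1, 2, 2, 2, 1, 3, 1]   # hypothetical model output

qwk = cohen_kappa_score(central_reader_ems, model_ems, weights="quadratic")
print(f"QWK = {qwk:.3f}")   # 1.0 = perfect agreement, 0 = chance-level agreement
```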


Subjects
Antibodies, Monoclonal, Humanized/administration & dosage ; Colitis, Ulcerative/diagnosis ; Colonoscopy/methods ; Deep Learning ; Image Interpretation, Computer-Assisted/methods ; Adolescent ; Adult ; Aged ; Antibodies, Monoclonal, Humanized/adverse effects ; Colitis, Ulcerative/drug therapy ; Colon/diagnostic imaging ; Colon/drug effects ; Feasibility Studies ; Female ; Humans ; Intestinal Mucosa/diagnostic imaging ; Intestinal Mucosa/drug effects ; Male ; Middle Aged ; Observer Variation ; Predictive Value of Tests ; Prospective Studies ; Severity of Illness Index ; Treatment Outcome ; Video Recording ; Young Adult
5.
Gastrointest Endosc; 91(6): 1264-1271.e1, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31930967

ABSTRACT

BACKGROUND AND AIMS: The visual detection of early esophageal neoplasia (high-grade dysplasia and T1 cancer) in Barrett's esophagus (BE) with white-light and virtual chromoendoscopy remains challenging. The aim of this study was to assess whether a convolutional neural network (CNN)-based artificial intelligence system can aid in the recognition of early esophageal neoplasia in BE. METHODS: Nine hundred sixteen images of histology-proven early esophageal neoplasia in BE containing high-grade dysplasia or T1 cancer were collected from 65 patients. The area of neoplasia was masked using image annotation software. Nine hundred nineteen control images of BE without high-grade dysplasia were collected. A CNN algorithm was pretrained on ImageNet and then fine-tuned with the goal of providing the correct binary classification of "dysplastic" or "nondysplastic." We developed an object detection algorithm that drew localization boxes around regions classified as dysplasia. RESULTS: The CNN analyzed 458 test images (225 dysplasia and 233 nondysplasia) and correctly detected early neoplasia with sensitivity of 96.4%, specificity of 94.2%, and accuracy of 95.4%. With regard to the object detection algorithm, for all images in the validation set the system achieved a mean average precision of .7533 at an intersection over union of .3. CONCLUSIONS: In this pilot study, our artificial intelligence model was able to detect early esophageal neoplasia in BE images with high accuracy. In addition, the object detection algorithm was able to draw a localization box around the areas of dysplasia with high precision and at a speed that allows for real-time implementation.
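
A note on the detection metric: mean average precision at an intersection-over-union (IoU) threshold of .3 treats a predicted localization box as a hit when it overlaps the annotated dysplasia region by at least that fraction. A minimal IoU sketch; the (x1, y1, x2, y2) box format and the example coordinates are assumptions for illustration:

```python
# Minimal sketch: intersection over union (IoU) for two axis-aligned boxes
# given as (x1, y1, x2, y2). A prediction counts as a hit when IoU >= 0.3,
# the threshold used for the mean-average-precision figure quoted above.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((10, 10, 60, 60), (20, 20, 70, 70)) >= 0.3)   # hypothetical boxes -> True
```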


Subjects
Barrett Esophagus ; Esophageal Neoplasms ; Neural Networks, Computer ; Barrett Esophagus/complications ; Barrett Esophagus/diagnostic imaging ; Esophageal Neoplasms/diagnostic imaging ; Esophagoscopy ; Humans ; Pilot Projects ; Video Recording
6.
Am J Gastroenterol; 115(1): 138-144, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31651444

ABSTRACT

OBJECTIVES: Reliable in situ diagnosis of diminutive (≤5 mm) colorectal polyps could allow for "resect and discard" and "diagnose and leave" strategies, resulting in $1 billion in cost savings per year in the United States alone. Current methodologies have failed to consistently meet the Preservation and Incorporation of Valuable Endoscopic Innovations (PIVI) initiative thresholds. Convolutional neural networks (CNNs) have the potential to predict polyp pathology and achieve PIVI thresholds in real time. METHODS: We developed a CNN-based optical pathology (OP) model using TensorFlow, pretrained on ImageNet and capable of operating at 77 frames per second. A total of 6,223 images of unique colorectal polyps of known pathology, location, size, and light source (white light or narrow band imaging [NBI]) underwent 5-fold cross-training (80%) and validation (20%). Separate fresh validation was performed on 634 polyp images. Surveillance intervals were calculated, comparing OP with true pathology. RESULTS: In the original validation set, the negative predictive value for adenomas was 97% among diminutive rectum/rectosigmoid polyps. Results were independent of the use of NBI or white light. Surveillance interval concordance comparing OP and true pathology was 93%. In the fresh validation set, the negative predictive value was 97% among diminutive polyps in the rectum and rectosigmoid, and surveillance concordance was 94%. DISCUSSION: This study demonstrates the feasibility of in situ diagnosis of colorectal polyps using CNNs. Our model exceeds PIVI thresholds for both "resect and discard" and "diagnose and leave" strategies independent of NBI use. Point-of-care adenoma detection rate and surveillance recommendations are potential added benefits.
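
A note on the PIVI criterion: for the "diagnose and leave" strategy, the governing quantity is the negative predictive value (NPV) for adenomatous histology among diminutive rectosigmoid polyps that the model calls non-adenomatous, with a 90% threshold. A minimal sketch with hypothetical counts, not the study's data:

```python
# Minimal sketch: negative predictive value (NPV) for adenomatous histology,
# computed over diminutive rectosigmoid polyps called non-adenomatous by the
# optical pathology model. Counts are hypothetical, not the study's data.
def negative_predictive_value(true_negatives: int, false_negatives: int) -> float:
    """Fraction of 'non-adenoma' optical calls confirmed by true pathology."""
    return true_negatives / (true_negatives + false_negatives)

npv = negative_predictive_value(true_negatives=97, false_negatives=3)
print(f"NPV = {npv:.0%}")   # PIVI 'diagnose and leave' threshold is >= 90%
```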


Subjects
Adenoma/pathology ; Colonic Polyps/pathology ; Colorectal Neoplasms/pathology ; Deep Learning ; Population Surveillance ; Adenoma/diagnostic imaging ; Algorithms ; Colonic Polyps/diagnostic imaging ; Colonoscopy ; Colorectal Neoplasms/diagnostic imaging ; Forecasting/methods ; Humans ; Narrow Band Imaging ; Point-of-Care Systems ; Predictive Value of Tests ; Time Factors