Results 1 - 6 of 6
1.
Gastrointest Endosc; 97(2): 268-278.e1, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36007584

ABSTRACT

BACKGROUND AND AIMS: Accurately classifying biliary strictures as benign or malignant remains challenging. It has been suggested that direct visualization and interpretation of cholangioscopy images provide greater accuracy for stricture classification than current ERCP-based sampling techniques (ie, brush cytology and forceps biopsy sampling). We aimed to develop a convolutional neural network (CNN) model capable of accurate stricture classification and real-time evaluation based solely on cholangioscopy image analysis. METHODS: Consecutive patients who underwent cholangioscopy examinations from 2012 to 2021 were reviewed. A CNN was developed and tested using cholangioscopy images with direct expert annotations. The CNN was then applied to a multicenter, reserved test set of cholangioscopy videos. CNN performance was directly compared with that of ERCP sampling techniques. Occlusion block heatmap analyses were used to evaluate and rank cholangioscopy features associated with malignant biliary strictures (MBSs). RESULTS: One hundred fifty-four patients with available cholangioscopy examinations were included in the study. The final image database comprised 2,388,439 still images. The CNN demonstrated good performance when tasked with mimicking expert annotations of high-quality malignant images (area under the receiver-operating characteristic curve, .941). Overall accuracy of CNN-based video analysis (.906) was significantly greater than that of brush cytology (.625, P = .04) or forceps biopsy sampling (.609, P = .03). Occlusion block heatmap analysis demonstrated that the most frequent image feature for an MBS was the presence of frond-like mucosa/papillary projections. CONCLUSIONS: This study demonstrates that a CNN developed using cholangioscopy data alone has greater accuracy for biliary stricture classification than traditional ERCP-based sampling techniques.
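The abstract does not describe how the occlusion block heatmap analysis was implemented; the sketch below is only a generic illustration of the technique, assuming a hypothetical black-box classifier predict_malignant(image) that returns the probability a cholangioscopy frame shows a malignant stricture. Regions whose occlusion causes the largest drop in that probability are the image features the model relies on.

```python
# Illustrative occlusion-block heatmap sketch (not the authors' code).
# Assumes predict_malignant(image) -> float in [0, 1] for a single frame.
import numpy as np

def occlusion_heatmap(image: np.ndarray, predict_malignant,
                      block: int = 32, stride: int = 16) -> np.ndarray:
    """Slide a neutral block over the frame and record the probability drop."""
    h, w = image.shape[:2]
    baseline = predict_malignant(image)
    rows = (h - block) // stride + 1
    cols = (w - block) // stride + 1
    heat = np.zeros((rows, cols))
    for i, y in enumerate(range(0, h - block + 1, stride)):
        for j, x in enumerate(range(0, w - block + 1, stride)):
            occluded = image.copy()
            occluded[y:y + block, x:x + block] = image.mean()  # mask this region
            heat[i, j] = baseline - predict_malignant(occluded)  # importance score
    return heat
```

Large values in the returned grid mark regions (eg, frond-like mucosa or papillary projections) that drive the malignant call.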


Subjects
Cholestasis, Deep Learning, Humans, Constriction, Pathologic/diagnosis, Artificial Intelligence, Prospective Studies, Cholestasis/diagnostic imaging, Cholestasis/etiology
2.
Gut; 70(7): 1335-1344, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33028668

ABSTRACT

OBJECTIVE: The diagnosis of autoimmune pancreatitis (AIP) is challenging. Sonographic and cross-sectional imaging findings of AIP closely mimic pancreatic ductal adenocarcinoma (PDAC), and techniques for tissue sampling of AIP are suboptimal. These limitations often result in delayed or failed diagnosis, which negatively impacts patient management and outcomes. This study aimed to create an endoscopic ultrasound (EUS)-based convolutional neural network (CNN) model trained to differentiate AIP from PDAC, chronic pancreatitis (CP) and normal pancreas (NP), with sufficient performance to analyse EUS video in real time. DESIGN: A database of still image and video data obtained from EUS examinations of cases of AIP, PDAC, CP and NP was used to develop a CNN. Occlusion heatmap analysis was used to identify sonographic features the CNN valued when differentiating AIP from PDAC. RESULTS: From 583 patients (146 AIP, 292 PDAC, 72 CP and 73 NP), a total of 1,174,461 unique EUS images were extracted. For video data, the CNN processed 955 EUS frames per second and was: 99% sensitive and 98% specific for distinguishing AIP from NP; 94% sensitive and 71% specific for distinguishing AIP from CP; 90% sensitive and 93% specific for distinguishing AIP from PDAC; and 90% sensitive and 85% specific for distinguishing AIP from all studied conditions (ie, PDAC, CP and NP). CONCLUSION: The developed EUS-CNN model accurately differentiated AIP from PDAC and benign pancreatic conditions, thereby offering the capability of earlier and more accurate diagnosis. Use of this model offers the potential for more timely and appropriate patient care and improved outcomes.
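The abstract reports video-level sensitivity and specificity but does not state how per-frame CNN outputs were combined into a per-video call; the sketch below shows one common aggregation rule (mean pooling of per-frame probabilities) under that assumption, with predict_probs(frame) standing in for the model.

```python
# Hypothetical video-level aggregation of per-frame CNN predictions
# (illustrative only; the aggregation rule is an assumption, not the paper's method).
import numpy as np

CLASSES = ["AIP", "PDAC", "CP", "NP"]

def classify_video(frames, predict_probs):
    """predict_probs(frame) -> length-4 probability vector over CLASSES."""
    probs = np.array([predict_probs(f) for f in frames])
    mean_probs = probs.mean(axis=0)              # average probabilities across frames
    return CLASSES[int(mean_probs.argmax())], mean_probs
```

Because the model classifies hundreds of frames per second, pooling like this can run alongside a live EUS examination.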


Subjects
Autoimmune Pancreatitis/diagnostic imaging, Carcinoma, Pancreatic Ductal/diagnostic imaging, Endosonography, Image Interpretation, Computer-Assisted/methods, Neural Networks, Computer, Pancreatic Neoplasms/diagnostic imaging, Area Under Curve, Diagnosis, Differential, Humans, Machine Learning, Observer Variation, Pancreas/diagnostic imaging, ROC Curve
3.
Gastrointest Endosc; 93(5): 1121-1130.e1, 2021 May.
Article in English | MEDLINE | ID: mdl-32861752

ABSTRACT

BACKGROUND AND AIMS: Detection and characterization of focal liver lesions (FLLs) is key to optimizing treatment for patients who may have a primary hepatic cancer or metastatic disease to the liver. This is the first study to develop an EUS-based convolutional neural network (CNN) model for the purpose of identifying and classifying FLLs. METHODS: A prospective EUS database comprising cases of FLLs visualized and sampled via EUS was reviewed. Relevant still images and videos of liver parenchyma and FLLs were extracted. Patient data were then randomly distributed for the purpose of CNN model training and testing. Once a final model was created, occlusion heatmap analysis was performed to assess the ability of the EUS-CNN model to autonomously identify FLLs. The performance of the EUS-CNN for differentiating benign and malignant FLLs was also analyzed. RESULTS: A total of 210,685 unique EUS images from 256 patients were used to train, validate, and test the CNN model. Occlusion heatmap analyses demonstrated that the EUS-CNN model successfully and autonomously located FLLs in 92.0% of EUS video assets. When evaluating random still images extracted from videos or captured by physicians, the CNN model was 90% sensitive and 71% specific (area under the receiver operating characteristic curve [AUROC], 0.861) for classifying malignant FLLs. When evaluating full-length video assets, the EUS-CNN model was 100% sensitive and 80% specific (AUROC, 0.904) for classifying malignant FLLs. CONCLUSIONS: This study demonstrated the capability of an EUS-CNN model to autonomously identify FLLs and to accurately classify them as either malignant or benign lesions.
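For readers checking how sensitivity, specificity, and AUROC relate at a chosen operating point, the snippet below is a minimal sketch using scikit-learn; it is illustrative and not the study's analysis code, and the 0.5 threshold is an assumption.

```python
# Compute sensitivity, specificity, and AUROC from binary labels and model scores.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def report_metrics(y_true, y_score, threshold: float = 0.5):
    """y_true: 0/1 ground truth (1 = malignant); y_score: model probabilities."""
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)     # true-positive rate
    specificity = tn / (tn + fp)     # true-negative rate
    return sensitivity, specificity, roc_auc_score(y_true, y_score)
```

Shifting the threshold trades sensitivity against specificity, which is why the still-image and full-video operating points in the abstract differ while both sit on high-AUROC curves.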


Subjects
Artificial Intelligence, Liver Neoplasms, Humans, Liver Neoplasms/diagnostic imaging, Neural Networks, Computer, Prospective Studies, Sensitivity and Specificity
5.
Cancer Cytopathol; 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39207803

ABSTRACT

BACKGROUND: The authors previously developed an artificial intelligence (AI) tool to assist cytologists in the evaluation of digital whole-slide images (WSIs) generated from bile duct brushing specimens. The aim of this trial was to assess the efficiency and accuracy of cytologists using a novel application with this AI tool. METHODS: Consecutive bile duct brushing WSIs from indeterminate strictures were obtained. A multidisciplinary panel reviewed all relevant information and provided a central interpretation for each WSI as "positive," "negative," or "indeterminate." The WSIs were then uploaded to the AI application. The AI scored each WSI as positive or negative for malignancy (i.e., computer-aided diagnosis [CADx]). For each WSI, the AI prioritized cytologic tiles by the likelihood that malignant material was present in the tile. Using the AI application, blinded cytologists reviewed all WSIs and provided interpretations (i.e., computer-aided detection [CADe]). The diagnostic accuracies of WSI evaluation via CADx, CADe, and the original clinical cytologic interpretation (official cytologic interpretation [OCI]) were compared. RESULTS: Of the 84 WSIs, 15 were positive, 42 were negative, and 27 were indeterminate after central review. Each WSI comprised an average of 141,950 tiles. Cytologists using the AI evaluated an average of 10.5 tiles per WSI before making an interpretation and required an average of 84.1 seconds of total WSI evaluation time. WSI interpretation accuracies for CADx (0.754; 95% CI, 0.622-0.859), CADe (0.807; 95% CI, 0.750-0.856), and OCI (0.807; 95% CI, 0.671-0.900) were similar. CONCLUSIONS: This trial demonstrates that an AI application allows cytologists to perform a triaged review of WSIs while maintaining accuracy.
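The abstract describes the AI ranking cytologic tiles by malignancy likelihood so that cytologists review only a handful of tiles per slide; the sketch below is a hypothetical illustration of such a triaged (CADe-style) review loop, with score_tile and review_tile standing in for the model's tile score and the cytologist's read, neither of which is specified in the source.

```python
# Hypothetical triaged review of WSI tiles ranked by model malignancy score.
def prioritized_review(tiles, score_tile, review_tile, max_tiles: int = 20):
    """score_tile(tile) -> float; review_tile(tile) -> 'positive', 'negative', or None."""
    ranked = sorted(tiles, key=score_tile, reverse=True)   # most suspicious tiles first
    for n, tile in enumerate(ranked[:max_tiles], start=1):
        call = review_tile(tile)                           # cytologist reads this tile
        if call in ("positive", "negative"):
            return call, n                                 # interpretation, tiles reviewed
    return "indeterminate", min(len(ranked), max_tiles)
```

Reviewing a prioritized shortlist rather than all tiles is what allows the roughly ten-tile, under-two-minute evaluations reported in the trial while keeping accuracy comparable to the official cytologic interpretation.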

6.
Gastrointest Endosc Clin N Am; 31(2): 387-397, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33743933

ABSTRACT

Artificial intelligence (AI) research for medical applications has expanded quickly. Advancements in computer processing now allow for the development of complex neural network architectures (eg, convolutional neural networks) that are capable of extracting and learning complex features from massive data sets, including large image databases. Gastroenterology and endoscopy are well suited for AI research. Video capsule endoscopy is an ideal platform for AI model research given the large amount of data produced by each capsule examination and the annotated databases that are already available. Studies have demonstrated high performance for applications of capsule-based AI models developed for various pathologic conditions.


Subjects
Capsule Endoscopy, Gastroenterology, Artificial Intelligence, Humans, Research