Results 1 - 3 of 3
1.
J Glaucoma; 33(4): 246-253, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38245813

ABSTRACT

PRCIS: A deep learning model trained on macular OCT imaging studies detected clinically significant functional glaucoma progression and was also able to predict future progression.

OBJECTIVE: To use deep learning with macular optical coherence tomography (OCT) imaging to detect concurrent visual field progression and to predict future progression.

DESIGN: A retrospective cohort study.

SUBJECTS: The pretraining data set comprised 7,702,201 B-scan images from 151,389 macular OCT studies. The progression detection task included 3902 macular OCT imaging studies from 1534 eyes of 828 patients with glaucoma, and the progression prediction task included 1346 macular OCT studies from 1205 eyes of 784 patients.

METHODS: A novel deep learning method was developed to detect glaucoma progression and to predict future progression from macular OCT, based on self-supervised pretraining of a vision transformer (ViT) model on a large, unlabeled data set of OCT images. Glaucoma progression was defined as a mean deviation (MD) rate of change of ≤ -0.5 dB/year over 5 consecutive Humphrey visual field tests, and rapid progression was defined as an MD rate of change of ≤ -1 dB/year.

MAIN OUTCOME MEASURES: Diagnostic performance of the ViT model for predicting future visual field progression and detecting concurrent visual field progression, measured by area under the receiver operating characteristic curve (AUC), sensitivity, and specificity.

RESULTS: The model distinguished stable eyes from progressing eyes, achieving an AUC of 0.90 (95% CI, 0.88-0.91). Rapid progression was detected with an AUC of 0.92 (95% CI, 0.91-0.93). The model also demonstrated high predictive ability for forecasting future glaucoma progression, with an AUC of 0.85 (95% CI, 0.83-0.87). Rapid progression was predicted with an AUC of 0.84 (95% CI, 0.81-0.86).

CONCLUSIONS: A deep learning model detected clinically significant functional glaucoma progression from macular OCT imaging studies and was also able to predict future progression. Early identification of patients undergoing glaucoma progression, or at high risk for future progression, may aid clinical decision-making.


Subject(s)
Deep Learning; Glaucoma; Humans; Visual Fields; Tomography, Optical Coherence/methods; Retrospective Studies; Intraocular Pressure; Retinal Ganglion Cells; Glaucoma/diagnosis; Visual Field Tests/methods
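
The progression labels in this study reduce to a slope criterion on visual field mean deviation. A minimal sketch of that labeling step, assuming per-eye MD values and test dates as inputs (illustrative only, not the authors' code):

# Illustrative sketch: derive progression labels from the MD slope criterion.
import numpy as np

def md_slope(years, md_values):
    """Least-squares MD rate of change (dB/year) over consecutive tests."""
    slope, _intercept = np.polyfit(np.asarray(years), np.asarray(md_values), deg=1)
    return slope

def label_progression(years, md_values):
    """Progression: slope <= -0.5 dB/year; rapid: slope <= -1 dB/year."""
    s = md_slope(years, md_values)
    return s <= -0.5, s <= -1.0

# Five consecutive Humphrey visual fields over 4 years, MD in dB:
progressing, rapid = label_progression([0, 1, 2, 3, 4],
                                       [-2.0, -2.6, -3.1, -3.7, -4.3])
print(progressing, rapid)  # True False (slope ~ -0.57 dB/year)
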
2.
Endosc Int Open; 12(7): E849-E853, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38966321

ABSTRACT

Background and study aims: Low-quality colonoscopy increases cancer risk, but measuring quality remains challenging. We developed an automated, interactive assessment of colonoscopy quality (AI-CQ) using machine learning (ML).

Methods: Based on quality guidelines, metrics selected for AI development included insertion time (IT), withdrawal time (WT), polyp detection rate (PDR), and polyps per colonoscopy (PPC). Two novel metrics were also developed: HQ-WT (time during withdrawal with a clear image) and WT-PT (withdrawal time minus polypectomy time). The model was pretrained using a self-supervised vision transformer on unlabeled colonoscopy images and then fine-tuned for multi-label classification on a separate, mutually exclusive colonoscopy image dataset. A timeline of video predictions and metric calculations was presented to clinicians alongside the raw video in a web-based application. The model was externally validated on 50 colonoscopies at a second hospital.

Results: AI-CQ accuracy in identifying cecal intubation was 88%. IT (ρ = 0.99) and WT (ρ = 0.99) were highly correlated between manual and AI-CQ measurements, with median differences of 1.5 and 4.5 seconds, respectively. AI-CQ PDR did not differ significantly from manual PDR (47.6% versus 45.5%, P = 0.66). Retroflexion was correctly identified in 95.2% of colonoscopies and the number of right colon evaluations in 100%. HQ-WT was 45.9% of WT and strongly correlated with it (ρ = 0.85).

Conclusions: An interactive AI assessment of colonoscopy skill can automatically assess quality. We propose that this tool can be used to rapidly identify and train providers in need of remediation.
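
The timing metrics defined above (IT, WT, WT-PT, HQ-WT) can be read directly off per-frame model outputs. A hypothetical sketch of that metric layer, with a frame-label schema invented here for illustration rather than the published AI-CQ interface:

# Hypothetical sketch: timing metrics from per-frame classifier outputs.
from dataclasses import dataclass

@dataclass
class Frame:
    t: float           # seconds since scope insertion
    at_cecum: bool     # True once cecal intubation has been reached
    clear_image: bool  # frame has a clear image (counts toward HQ-WT)
    polypectomy: bool  # polypectomy in progress (excluded from WT-PT)

def quality_metrics(frames: list[Frame], fps: float) -> dict:
    """Compute IT, WT, WT-PT, and HQ-WT (all in seconds)."""
    cecum_t = next(f.t for f in frames if f.at_cecum)   # first cecal frame
    withdrawal = [f for f in frames if f.t >= cecum_t]
    dt = 1.0 / fps                                      # seconds per frame
    wt = len(withdrawal) * dt
    return {
        "IT": cecum_t,                                           # insertion time
        "WT": wt,                                                # withdrawal time
        "WT-PT": wt - dt * sum(f.polypectomy for f in withdrawal),
        "HQ-WT": dt * sum(f.clear_image for f in withdrawal),
    }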

3.
Radiol Artif Intell; 6(3): e230079, 2024 May.
Article in English | MEDLINE | ID: mdl-38477661

ABSTRACT

Purpose: To evaluate the impact of an artificial intelligence (AI) assistant for lung cancer screening on multinational clinical workflows.

Materials and Methods: An AI assistant for lung cancer screening was evaluated in two retrospective randomized multireader, multicase studies in which 627 low-dose chest CT cases (141 cancer-positive) were each read twice (with and without AI assistance) by experienced thoracic radiologists (six U.S.-based or six Japan-based radiologists), for a total of 7524 interpretations. Positive cases were defined as those acquired within 2 years before a pathology-confirmed lung cancer diagnosis. Negative cases were defined as those without any subsequent cancer diagnosis for at least 2 years and were enriched for a spectrum of diverse nodules. The studies measured the readers' level of suspicion (on a 0-100 scale), country-specific screening system scoring categories, and management recommendations. Evaluation metrics included the area under the receiver operating characteristic curve (AUC) for level of suspicion and the sensitivity and specificity of recall recommendations.

Results: With AI assistance, the radiologists' AUC increased by 0.023 (0.70 to 0.72; P = .02) in the U.S. study and by 0.023 (0.93 to 0.96; P = .18) in the Japan study. Scoring system specificity for actionable findings increased 5.5% (57% to 63%; P < .001) in the U.S. study and 6.7% (23% to 30%; P < .001) in the Japan study. There was no evidence of a difference in corresponding sensitivity between unassisted and AI-assisted reads in the U.S. (67.3% to 67.5%; P = .88) and Japan (98% to 100%; P > .99) studies. Stand-alone AI AUC was 0.75 (95% CI: 0.70, 0.81) and 0.88 (95% CI: 0.78, 0.97) on the U.S.- and Japan-based datasets, respectively.

Conclusion: The concurrent AI interface improved lung cancer screening specificity in both the U.S.- and Japan-based reader studies, meriting further study in additional international screening environments.

Keywords: Assistive Artificial Intelligence, Lung Cancer Screening, CT. Supplemental material is available for this article. Published under a CC BY 4.0 license.


Subject(s)
Artificial Intelligence; Early Detection of Cancer; Lung Neoplasms; Tomography, X-Ray Computed; Humans; Lung Neoplasms/diagnosis; Lung Neoplasms/epidemiology; Japan; United States/epidemiology; Retrospective Studies; Early Detection of Cancer/methods; Female; Male; Middle Aged; Aged; Sensitivity and Specificity; Radiographic Image Interpretation, Computer-Assisted/methods
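
The evaluation in this study reduces to standard metrics: AUC over 0-100 suspicion scores and sensitivity/specificity of binary recall recommendations. A sketch with synthetic data (the scores and the recall threshold of 60 are invented for illustration; this is not the study's pipeline):

# Illustrative sketch: reader-study metrics on synthetic suspicion scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                   # 1 = cancer-positive case
suspicion = np.clip(40 * y_true + rng.normal(40, 15, 200), 0, 100)
recall = suspicion >= 60                                # binary recall recommendation

auc = roc_auc_score(y_true, suspicion)                  # AUC over 0-100 scores
sensitivity = recall[y_true == 1].mean()                # recalled among positives
specificity = (~recall)[y_true == 0].mean()             # not recalled among negatives
print(f"AUC={auc:.2f} sensitivity={sensitivity:.1%} specificity={specificity:.1%}")
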