Results 1 - 2 of 2
1.
Transl Vis Sci Technol ; 13(10): 21, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39392437

ABSTRACT

Purpose: To identify optical coherence tomography (OCT) biomarkers for macula-off rhegmatogenous retinal detachment (RRD) with artificial intelligence (AI) and to correlate these biomarkers with functional outcomes.
Methods: Patients with macula-off RRD treated with single vitrectomy and gas tamponade were included. OCT volumes, acquired at 4 to 6 weeks and 1 year postoperatively, were uploaded to an AI-based platform (Discovery OCT Biomarker Detector; RetinAI AG, Bern, Switzerland) that measured retinal layer thicknesses, including outer nuclear layer (ONL), photoreceptor and retinal pigmented epithelium (PR + RPE), intraretinal fluid (IRF), and subretinal fluid, and estimated biomarker detection probabilities, including hyperreflective foci (HF). A random forest model assessed the predictive factors for final best-corrected visual acuity (BCVA).
Results: Fifty-nine patients (42 male, 17 female) were enrolled. Baseline BCVA was 0.5 ± 0.1 logarithm of the minimum angle of resolution (logMAR), improving significantly to 0.3 ± 0.1 logMAR at the final visit (P < 0.001). Average thickness analysis showed a significant increase by the last follow-up visit for ONL (from 95.16 ± 5.47 µm to 100.8 ± 5.27 µm, P = 0.0007) and PR + RPE (from 60.9 ± 2.6 µm to 66.2 ± 1.8 µm, P = 0.0001). The average occurrence rate of HF was 0.12 ± 0.06 at the initial visit and 0.08 ± 0.05 at the last follow-up visit (P = 0.0093). The random forest model identified baseline BCVA as the most important predictor of final BCVA, followed by ONL thickness, HF, and IRF presence at the initial visit.
Conclusions: Increased ONL and PR + RPE thicknesses are associated with better outcomes, whereas the presence of HF indicates poorer results; initial BCVA remains the primary visual predictor.
Translational Relevance: The study underscores the role of novel biomarkers such as HF in understanding visual function in macula-off RRD.
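A minimal sketch of the kind of random-forest analysis the abstract describes, using scikit-learn on synthetic data; the feature encoding, the synthetic values, and the target relationship below are illustrative assumptions, not the study's data:

```python
# Hypothetical sketch: a random forest ranking baseline OCT biomarkers as
# predictors of final BCVA. All data below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 59  # cohort size reported in the abstract
X = np.column_stack([
    rng.normal(0.5, 0.1, n),     # baseline BCVA (logMAR)
    rng.normal(95.0, 5.5, n),    # ONL thickness (um)
    rng.normal(0.12, 0.06, n),   # HF occurrence rate
    rng.integers(0, 2, n),       # IRF present at initial visit (0/1)
])
# Synthetic target: final BCVA dominated by baseline BCVA, plus noise.
y = 0.6 * X[:, 0] - 0.001 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.02, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["baseline BCVA", "ONL", "HF", "IRF"],
                     model.feature_importances_):
    print(f"{name}: {imp:.2f}")
```

Ranking the model's feature importances is what yields a predictor ordering of the kind reported in the Results section.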


Subjects
Artificial Intelligence , Biomarkers , Retinal Detachment , Optical Coherence Tomography , Visual Acuity , Vitrectomy , Humans , Retinal Detachment/surgery , Retinal Detachment/metabolism , Male , Female , Optical Coherence Tomography/methods , Middle Aged , Visual Acuity/physiology , Aged , Adult , Macula Lutea/pathology , Macula Lutea/diagnostic imaging , Endotamponade
2.
Int J Comput Assist Radiol Surg ; 18(7): 1185-1192, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37184768

ABSTRACT

PURPOSE: Surgical scene understanding plays a critical role in the technology stack of tomorrow's intervention-assisting systems for endoscopic surgery. Tracking the endoscope pose is a key component of this, but it remains challenging because of illumination conditions, deforming tissue, and the breathing motion of organs.
METHOD: We propose a solution for stereo endoscopes that estimates depth and optical flow and minimizes two geometric losses for camera pose estimation. Most importantly, we introduce two learned adaptive per-pixel weight mappings that balance contributions according to the input image content. To do so, we train a Deep Declarative Network that combines the expressiveness of deep learning with the robustness of a novel geometry-based optimization approach. We validate our approach on the publicly available SCARED dataset and introduce a new in vivo dataset, StereoMIS, which covers a wider spectrum of typically observed surgical settings.
RESULTS: Our method outperforms state-of-the-art methods on average and, more importantly, in difficult scenarios where tissue deformation and breathing motion are visible. We observed that our proposed weight mappings attenuate the contribution of pixels in ambiguous regions of the images, such as deforming tissue.
CONCLUSION: We demonstrate the effectiveness of our solution in robustly estimating the camera pose in challenging endoscopic surgical scenes. Our contributions can be used to improve related tasks such as simultaneous localization and mapping (SLAM) and 3D reconstruction, thereby advancing surgical scene understanding in minimally invasive surgery.
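A simplified sketch of the per-pixel weighting idea: each pixel's geometric residual is scaled by a weight before summation, so ambiguous regions contribute less to the pose objective. In the paper the weight mappings are learned by a Deep Declarative Network; here the weights and residuals are hand-set placeholders, assumed purely for illustration:

```python
# Sketch of a per-pixel weighted geometric loss. In the paper the weights
# are learned from image content; here they are fixed placeholders.
import numpy as np

def weighted_geometric_loss(residuals, weights):
    """Sum of squared per-pixel residuals, each scaled by its weight."""
    return float(np.sum(weights * residuals ** 2))

H, W = 4, 4
residuals = np.ones((H, W))     # stand-in reprojection/flow errors (pixels)
weights = np.full((H, W), 0.5)  # uniformly down-weighting every pixel
print(weighted_geometric_loss(residuals, weights))  # -> 8.0
```

In the full method, two such losses (one per geometric constraint, e.g. depth and optical-flow consistency) would be minimized over the camera pose, with the weight mappings suppressing residuals on deforming tissue.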


Subjects
Algorithms , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Endoscopy/methods , Minimally Invasive Surgical Procedures/methods , Endoscopes