Results 1 - 20 of 72
1.
Retina ; 44(2): 316-323, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-37883530

ABSTRACT

PURPOSE: To identify optical coherence tomography (OCT) features that predict the course of central serous chorioretinopathy (CSC) with an artificial intelligence-based program. METHODS: Multicenter, observational study with a retrospective design. Treatment-naïve patients with acute CSC and chronic CSC were enrolled. Baseline OCTs were examined by an artificial intelligence-developed platform (Discovery OCT Fluid and Biomarker Detector, RetinAI AG, Switzerland). Through this platform, automated retinal layer thicknesses and volumes, including intraretinal and subretinal fluid, and pigment epithelium detachment were measured. Baseline OCT features were compared between acute CSC and chronic CSC patients. RESULTS: One hundred and sixty eyes of 144 patients with CSC were enrolled, of which 100 had chronic CSC and 60 acute CSC. Retinal layer analysis of baseline OCT scans showed that the inner nuclear layer, the outer nuclear layer, and the photoreceptor-retinal pigmented epithelium complex were significantly thicker at baseline in eyes with acute CSC than in those with chronic CSC (P < 0.001). Similarly, the choriocapillaris, choroidal stroma, and retinal thickness (RT) were thicker in acute CSC than in chronic CSC eyes (P = 0.001). Volume analysis revealed greater average subretinal fluid volumes in the acute CSC group than in the chronic CSC group (P = 0.041). CONCLUSION: Optical coherence tomography features may help predict the clinical course of CSC. The baseline presence of increased thickness in the outer retinal layers, choriocapillaris, and choroidal stroma, together with a larger subretinal fluid volume, appears to be associated with an acute course of the disease.


Subjects
Central Serous Chorioretinopathy , Humans , Central Serous Chorioretinopathy/diagnosis , Tomography, Optical Coherence/methods , Retrospective Studies , Artificial Intelligence , Retina , Fluorescein Angiography
2.
Ophthalmologica ; 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38555632

ABSTRACT

INTRODUCTION: The aim of this study was to investigate the role of an artificial intelligence (AI)-developed OCT program in predicting the clinical course of central serous chorioretinopathy (CSC) based on baseline pigment epithelium detachment (PED) features. METHODS: Single-center, observational study with a retrospective design. Treatment-naïve patients with acute CSC and chronic CSC were recruited, and OCTs were analyzed by an AI-developed platform (Discovery OCT Fluid and Biomarker Detector, RetinAI AG, Switzerland), providing automatic detection and volumetric quantification of PEDs. Flat irregular PED presence was annotated manually and afterwards measured automatically by the AI program. RESULTS: 115 eyes of 101 patients with CSC were included, of which 70 were diagnosed with chronic CSC and 45 with acute CSC. Patients with baseline presence of foveal flat PEDs and multiple flat foveal and extrafoveal PEDs had a higher chance of developing the chronic form. AI-based volumetric analysis revealed no significant differences between the groups. CONCLUSIONS: While more evidence is needed to confirm the effectiveness of AI-based quantitative PED analysis, this study highlights the importance of identifying flat irregular PEDs at the earliest possible stage in patients with CSC, to optimize patient management and long-term visual outcomes.

3.
Mol Syst Biol ; 17(4): e10026, 2021 04.
Article in English | MEDLINE | ID: mdl-33835701

ABSTRACT

Current studies of cell signaling dynamics that use live cell fluorescent biosensors routinely yield thousands of single-cell, heterogeneous, multi-dimensional trajectories. Typically, the extraction of relevant information from time series data relies on predefined, human-interpretable features. Without a priori knowledge of the system, the predefined features may fail to cover the entire spectrum of dynamics. Here we present CODEX, a data-driven approach based on convolutional neural networks (CNNs) that identifies patterns in time series. It does not require a priori information about the biological system and the insights into the data are built through explanations of the CNNs' predictions. CODEX provides several views of the data: visualization of all the single-cell trajectories in a low-dimensional space, identification of prototypic trajectories, and extraction of distinctive motifs. We demonstrate how CODEX can provide new insights into ERK and Akt signaling in response to various growth factors, and we recapitulate findings in p53 and TGFβ-SMAD2 signaling.


Subjects
Algorithms , Neural Networks, Computer , Signal Transduction , Animals , Cell Line , Databases as Topic , Dose-Response Relationship, Radiation , Drosophila/physiology , Drosophila/radiation effects , Extracellular Signal-Regulated MAP Kinases/metabolism , Fluorescent Dyes/metabolism , Humans , Intercellular Signaling Peptides and Proteins/metabolism , Light , Machine Learning , Movement/radiation effects , Proto-Oncogene Proteins c-akt/metabolism , Radiation, Ionizing , Transforming Growth Factor beta/metabolism , Tumor Suppressor Protein p53/metabolism
4.
Eur J Nucl Med Mol Imaging ; 49(9): 3061-3072, 2022 07.
Article in English | MEDLINE | ID: mdl-35226120

ABSTRACT

PURPOSE: Alzheimer's disease (AD) studies revealed that abnormal tau deposition spreads in a specific spatial pattern, described by the Braak stages. However, Braak staging is based on post mortem brains, each of which represents only a cross-section of the tau trajectory in disease progression, and numerous cases that do not conform to that model have been reported. This study thus aimed to identify the tau trajectory and quantify tau progression in a data-driven approach with the continuous latent space learned by a variational autoencoder (VAE). METHODS: A total of 1080 [18F]Flortaucipir brain positron emission tomography (PET) images were collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. A VAE was built to compress the hidden features from tau images into a latent space. Hierarchical agglomerative clustering and a minimum spanning tree (MST) were applied to organize the features and calibrate them to the tau progression, thus deriving a pseudo-time. The image-level tau trajectory was inferred by continuously sampling across the calibrated latent features. We assessed the pseudo-time with regard to tau standardized uptake value ratio (SUVr) in AD-vulnerable regions, amyloid deposit, glucose metabolism, cognitive scores, and clinical diagnosis. RESULTS: We identified four clusters that plausibly capture certain stages of AD and organized the clusters in the latent space. The inferred tau trajectory agreed with the Braak staging. According to the derived pseudo-time, tau first deposits in the parahippocampal gyrus and amygdala, and then spreads to the fusiform gyrus, inferior temporal lobe, and posterior cingulate. Amyloid accumulates prior to the regional tau deposition. CONCLUSION: The spatiotemporal trajectory of tau progression inferred in this study was consistent with Braak staging. The profile of other biomarkers in disease progression agreed well with previous findings. We further showed that this approach has the potential to quantify tau progression as a continuous variable by taking a whole-brain tau image into account.
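The pseudo-time construction described in this abstract (organize latent features with a minimum spanning tree, then measure progression along the tree from a root) can be sketched in miniature. This is an illustrative reconstruction under stated assumptions, not the authors' pipeline; the toy latent points and the choice of root are invented:

```python
import math

def minimum_spanning_tree(points):
    """Prim's algorithm on Euclidean distances; returns edges as (i, j) pairs."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree:
                    d = math.dist(points[i], points[j])
                    if best is None or d < best[0]:
                        best = (d, i, j)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

def pseudo_time(points, root=0):
    """Pseudo-time of each sample = path length from the root along the MST."""
    edges = minimum_spanning_tree(points)
    adj = {i: [] for i in range(len(points))}
    for i, j in edges:
        w = math.dist(points[i], points[j])
        adj[i].append((j, w))
        adj[j].append((i, w))
    t = {root: 0.0}
    stack = [root]
    while stack:
        u = stack.pop()
        for v, w in adj[u]:
            if v not in t:
                t[v] = t[u] + w
                stack.append(v)
    return [t[i] for i in range(len(points))]

# Four toy latent points along a line: pseudo-time grows monotonically
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
print(pseudo_time(pts))  # → [0.0, 1.0, 2.0, 3.0]
```

In the study the tree is built over cluster centroids in the VAE latent space and calibrated to clinical stage; here the root is simply index 0.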


Subjects
Alzheimer Disease , Cognitive Dysfunction , Alzheimer Disease/metabolism , Brain/metabolism , Carbolines , Cognitive Dysfunction/metabolism , Disease Progression , Humans , Positron-Emission Tomography/methods , tau Proteins/metabolism
5.
Eur J Nucl Med Mol Imaging ; 49(6): 1843-1856, 2022 05.
Article in English | MEDLINE | ID: mdl-34950968

ABSTRACT

PURPOSE: A critical bottleneck for the credibility of artificial intelligence (AI) is replicating the results in the diversity of clinical practice. We aimed to develop an AI that can be independently applied to recover high-quality imaging from low-dose scans on different scanners and tracers. METHODS: Brain [18F]FDG PET imaging of 237 patients scanned with one scanner was used for the development of AI technology. The developed algorithm was then tested on [18F]FDG PET images of 45 patients scanned with three different scanners, [18F]FET PET images of 18 patients scanned with two different scanners, as well as [18F]Florbetapir images of 10 patients. A conditional generative adversarial network (GAN) was customized for cross-scanner and cross-tracer optimization. Three nuclear medicine physicians independently assessed the utility of the results in a clinical setting. RESULTS: The improvement achieved by AI recovery significantly correlated with the baseline image quality indicated by structural similarity index measurement (SSIM) (r = -0.71, p < 0.05) and normalized dose acquisition (r = -0.60, p < 0.05). Our cross-scanner and cross-tracer AI methodology showed utility based on both physical and clinical image assessment (p < 0.05). CONCLUSION: The deep learning development for extensible application on unknown scanners and tracers may improve the trustworthiness and clinical acceptability of AI-based dose reduction.
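The correlations reported here (e.g., r = -0.71 between AI recovery improvement and baseline SSIM) are standard Pearson coefficients. A minimal pure-Python sketch with invented values, not the study data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical pattern: higher baseline SSIM → smaller improvement,
# i.e. a strong negative correlation, as in the abstract
ssim = [0.70, 0.75, 0.80, 0.85, 0.90]
gain = [0.20, 0.17, 0.12, 0.08, 0.05]
print(round(pearson_r(ssim, gain), 3))
```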


Subjects
Deep Learning , Fluorodeoxyglucose F18 , Artificial Intelligence , Brain/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Positron-Emission Tomography/methods
6.
Ophthalmologica ; 245(6): 516-527, 2022.
Article in English | MEDLINE | ID: mdl-36215958

ABSTRACT

INTRODUCTION: In this retrospective cohort study, we evaluated the performance and analyzed the insights of an artificial intelligence (AI) algorithm in detecting retinal fluid in spectral-domain OCT volume scans from a large cohort of patients with neovascular age-related macular degeneration (AMD) and diabetic macular edema (DME). METHODS: A total of 3,981 OCT volumes from 374 patients with AMD and 11,501 OCT volumes from 811 patients with DME were acquired with a Heidelberg Spectralis OCT device (Heidelberg Engineering Inc., Heidelberg, Germany) between 2013 and 2021. Each OCT volume was annotated for the presence or absence of intraretinal fluid (IRF) and subretinal fluid (SRF) by masked reading center graders (ground truth). The performance of an already published AI algorithm to detect IRF and SRF separately, as well as of a combined fluid detector (IRF and/or SRF), was evaluated on the same OCT volumes. The sources of disagreement between annotation and prediction, and their relationship to central retinal thickness, were analyzed. We computed the mean areas under the receiver operating characteristic curves (AUC) and under the precision-recall curves (AP), accuracy, sensitivity, specificity, and precision. RESULTS: The AUC for IRF was 0.92 and 0.98, and for SRF 0.98 and 0.99, in the AMD and DME cohorts, respectively. The AP for IRF was 0.89 and 1.00, and for SRF 0.97 and 0.93, in the AMD and DME cohorts, respectively. The accuracy, specificity, and sensitivity for IRF were 0.87, 0.88, 0.84, and 0.93, 0.95, 0.93, and for SRF 0.93, 0.93, 0.93, and 0.95, 0.95, 0.95, in the AMD and DME cohorts, respectively. For detecting any fluid, the AUC was 0.95 and 0.98, and the accuracy, specificity, and sensitivity were 0.89, 0.93, and 0.90 and 0.95, 0.88, and 0.93, in the AMD and DME cohorts, respectively. False positives occurred when retinal shadow artifacts and strong retinal deformation were present. False negatives were due to small hyporeflective areas in combination with poor image quality. The combined detector correctly predicted more OCT volumes than the single detectors for IRF and SRF: 89.0% versus 81.6% in the AMD cohort and 93.1% versus 88.6% in the DME cohort. DISCUSSION/CONCLUSION: The AI-based fluid detector achieves high performance for retinal fluid detection in a very large dataset dedicated to AMD and DME. Combining single detectors provides better fluid detection accuracy than considering the single detectors separately. The observed independence of the single detectors ensures that the detectors learned features particular to IRF and SRF.
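The accuracy, sensitivity, specificity, and precision figures in this record come from a standard binary confusion matrix. A minimal sketch with invented labels (1 = fluid present), not the study data:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall), specificity and precision
    from paired binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
    }

truth = [1, 1, 1, 0, 0, 0, 1, 0]
pred  = [1, 1, 0, 0, 0, 1, 1, 0]
print(binary_metrics(truth, pred))  # all four are 0.75 for this toy split
```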


Subjects
Diabetes Mellitus , Diabetic Retinopathy , Macular Degeneration , Macular Edema , Wet Macular Degeneration , Humans , Macular Edema/diagnosis , Diabetic Retinopathy/diagnosis , Tomography, Optical Coherence/methods , Subretinal Fluid , Retrospective Studies , Artificial Intelligence , Macular Degeneration/diagnosis , Angiogenesis Inhibitors
7.
Biomed Eng Online ; 13: 74, 2014 Jun 12.
Article in English | MEDLINE | ID: mdl-25012210

ABSTRACT

BACKGROUND: Maximum Intensity Projections (MIPs) of neuronal dendritic trees obtained from confocal microscopy are frequently used to study the relationship between tree morphology and mechanosensory function in the model organism C. elegans. However, extracting dendritic trees from noisy images remains a strenuous process that has traditionally relied on manual approaches. Here, we focus on automated and reliable 2D segmentation of dendritic trees following a statistical learning framework. METHODS: Our dendritic tree extraction (DTE) method uses small amounts of labelled training data on MIPs to learn noise models of texture-based features from the responses of tree structures and image background. Our strategy lies in evaluating statistical models of noise that account for both the variability generated by the imaging process and the aggregation of information in the MIP images. These noise models are then used within a probabilistic, or Bayesian, framework to provide a coarse 2D dendritic tree segmentation. Finally, post-processing is applied to refine the segmentations and provide skeletonized trees using a morphological thinning process. RESULTS: Following a Leave-One-Out Cross-Validation (LOOCV) method on an MIP database with available "ground truth" images, we demonstrate that our approach provides significant improvements in tree-structure segmentation over traditional intensity-based methods. Improvements for MIPs under various imaging conditions are both qualitative and quantitative, as measured from Receiver Operating Characteristic (ROC) curves and the yield and error rates in the final segmentations. In a final step, we demonstrate our DTE approach on previously unseen MIP samples, including the extraction of skeletonized structures, and compare our method to a state-of-the-art dendritic tree tracing software.
CONCLUSIONS: Overall, our DTE method allows for robust dendritic tree segmentation in noisy MIPs, outperforming traditional intensity-based methods. Such an approach provides a usable segmentation framework, ultimately delivering a speed-up for dendritic tree identification on the user end and a reliable first step towards further morphological characterization of tree arborization.


Subjects
Caenorhabditis elegans/cytology , Dendrites , Image Processing, Computer-Assisted/methods , Microscopy, Confocal/methods , Algorithms , Animals , Mechanotransduction, Cellular
8.
Comput Struct Biotechnol J ; 24: 334-342, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38690550

ABSTRACT

Malaria, a significant global health challenge, is caused by Plasmodium parasites. The Plasmodium liver stage plays a pivotal role in the establishment of the infection. This study focuses on the liver stage development of the model organism Plasmodium berghei, employing fluorescent microscopy imaging and convolutional neural networks (CNNs) for analysis. Convolutional neural networks have been recently proposed as a viable option for tasks such as malaria detection, prediction of host-pathogen interactions, or drug discovery. Our research aimed to predict the transition of Plasmodium-infected liver cells to the merozoite stage, a key development phase, 15 hours in advance. We collected and analyzed hourly imaging data over a span of at least 38 hours from 400 sequences, encompassing 502 parasites. Our method was compared to human annotations to validate its efficacy. Performance metrics, including the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity, were evaluated on an independent test dataset. The outcomes revealed an AUC of 0.873, a sensitivity of 84.6%, and a specificity of 83.3%, underscoring the potential of our CNN-based framework to predict liver stage development of P. berghei. These findings not only demonstrate the feasibility of our methodology but also could potentially contribute to the broader understanding of parasite biology.
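The reported AUC can be computed directly from classifier scores via the Mann-Whitney interpretation: the probability that a randomly chosen positive sample outscores a randomly chosen negative one. A minimal sketch with invented scores, not the study data:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

pos = [0.9, 0.8, 0.6]   # hypothetical scores of parasites that did transition
neg = [0.7, 0.4, 0.2]   # hypothetical scores of parasites that did not
print(auc(pos, neg))    # → 0.888... (8 of 9 pairs ranked correctly)
```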

9.
Transl Vis Sci Technol ; 13(6): 10, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38884547

ABSTRACT

Purpose: To explore the structural-functional loss relationship from optic-nerve-head- and macula-centred spectral-domain (SD) Optical Coherence Tomography (OCT) images in the full spectrum of glaucoma patients using deep-learning methods. Methods: A cohort comprising 5238 unique eyes classified as suspects or diagnosed with glaucoma was considered. All patients underwent ophthalmologic examination consisting of standard automated perimetry (SAP), macular OCT, and peri-papillary OCT on the same day. Deep learning models were trained to estimate G-pattern visual field (VF) mean deviation (MD) and cluster MD using retinal thickness maps from seven layers: retinal nerve fiber layer (RNFL), ganglion cell layer and inner plexiform layer (GCL + IPL), inner nuclear layer and outer plexiform layer (INL + OPL), outer nuclear layer (ONL), photoreceptors and retinal pigmented epithelium (PR + RPE), choriocapillaris and choroidal stroma (CC + CS), and total retinal thickness (RT). Results: The best performance on MD prediction is achieved by the RNFL, GCL + IPL, and RT layers, with R2 scores of 0.37, 0.33, and 0.31, respectively. Combining macular and peri-papillary scans outperforms single-modality prediction, achieving an R2 value of 0.48. Cluster MD predictions show promising results, notably in central clusters, reaching an R2 of 0.56. Conclusions: The combination of multiple modalities, such as optic-nerve-head circular B-scans and retinal thickness maps from macular SD-OCT images, improves the performance of MD and cluster MD prediction. Our proposed model demonstrates the highest level of accuracy in predicting MD in the early-to-mid stages of glaucoma. Translational Relevance: Objective measures recorded with SD-OCT can optimize the number of visual field tests and improve individualized glaucoma care by adjusting VF testing frequency based on deep-learning estimates of functional damage.
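The R2 scores above are coefficients of determination: the fraction of variance in measured mean deviation that the model explains. A minimal sketch with hypothetical VF values (dB), not the study data:

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

md_true = [-1.0, -2.5, -4.0, -6.0]   # hypothetical measured mean deviations
md_pred = [-1.5, -2.0, -4.5, -5.5]   # hypothetical model estimates
print(round(r2_score(md_true, md_pred), 3))  # → 0.927
```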


Subjects
Deep Learning , Macula Lutea , Tomography, Optical Coherence , Visual Fields , Tomography, Optical Coherence/methods , Humans , Female , Middle Aged , Male , Visual Fields/physiology , Macula Lutea/diagnostic imaging , Macula Lutea/pathology , Prognosis , Aged , Retinal Ganglion Cells/pathology , Glaucoma/diagnostic imaging , Glaucoma/pathology , Nerve Fibers/pathology , Visual Field Tests/methods , Optic Disk/diagnostic imaging , Optic Disk/pathology
10.
Int J Comput Assist Radiol Surg ; 19(5): 851-859, 2024 May.
Article in English | MEDLINE | ID: mdl-38189905

ABSTRACT

PURPOSE: Semantic segmentation plays a pivotal role in many applications related to medical image and video analysis. However, designing a neural network architecture for medical image and surgical video segmentation is challenging due to the diverse features of relevant classes, including heterogeneity, deformability, transparency, blunt boundaries, and various distortions. We propose a network architecture, DeepPyramid+, which addresses diverse challenges encountered in medical image and surgical video segmentation. METHODS: The proposed DeepPyramid+ incorporates two major modules, namely "Pyramid View Fusion" (PVF) and "Deformable Pyramid Reception" (DPR), to address the outlined challenges. PVF replicates a deduction process within the neural network, aligning with the human visual system, thereby enhancing the representation of relative information at each pixel position. Complementarily, DPR introduces shape- and scale-adaptive feature extraction techniques using dilated deformable convolutions, enhancing accuracy and robustness in handling heterogeneous classes and deformable shapes. RESULTS: Extensive experiments conducted on diverse datasets, including endometriosis videos, MRI images, OCT scans, and cataract and laparoscopy videos, demonstrate the effectiveness of DeepPyramid+ in handling various challenges such as shape and scale variation, reflection, and blur degradation. DeepPyramid+ demonstrates significant improvements in segmentation performance, achieving up to a 3.65% increase in Dice coefficient for intra-domain segmentation and up to a 17% increase in Dice coefficient for cross-domain segmentation. CONCLUSIONS: DeepPyramid+ consistently outperforms state-of-the-art networks across diverse modalities considering different backbone networks, showcasing its versatility. 
Accordingly, DeepPyramid+ emerges as a robust and effective solution, successfully overcoming the intricate challenges associated with relevant content segmentation in medical images and surgical videos. Its consistent performance and adaptability indicate its potential to enhance precision in computerized medical image and surgical video analysis applications.
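The Dice coefficient used to report the segmentation gains above is 2|A∩B| / (|A| + |B|) for predicted and ground-truth masks. A minimal sketch on toy binary masks, not the study data:

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks (flat sequences of 0/1)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a == 1 and b == 1)
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 0, 1, 1, 1, 0]
print(dice(pred, truth))  # → 0.666... (2 shared pixels, 3 + 3 total)
```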


Subjects
Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Video Recording , Magnetic Resonance Imaging/methods , Tomography, Optical Coherence/methods , Female , Laparoscopy/methods , Algorithms
11.
IEEE Trans Med Imaging ; PP, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38640052

ABSTRACT

In Ultrasound Localization Microscopy (ULM), achieving high-resolution images relies on the precise localization of contrast agent particles across a series of beamformed frames. However, our study uncovers an overlooked opportunity: the process of delay-and-sum beamforming leads to an irreversible reduction of Radio-Frequency (RF) channel data, yet its implications for localization remain largely unexplored. The rich contextual information embedded within RF wavefronts, including their hyperbolic shape and phase, offers great promise for guiding Deep Neural Networks (DNNs) in challenging localization scenarios. To fully exploit this data, we propose to localize scatterers directly in the RF channel data. Our approach involves a custom super-resolution DNN using learned feature channel shuffling, non-maximum suppression, and a semi-global convolutional block for reliable and accurate wavefront localization. Additionally, we introduce a geometric point transformation that facilitates seamless mapping to the B-mode coordinate space. To understand the impact of beamforming on ULM, we validate the effectiveness of our method through an extensive comparison with State-Of-The-Art (SOTA) techniques. We present the first in vivo results from a wavefront-localizing DNN, highlighting its real-world practicality. Our findings show that RF-ULM bridges the domain shift between synthetic and real datasets, offering a considerable advantage in terms of precision and complexity. To enable the broader research community to benefit from our findings, our code and the associated SOTA methods are made available at https://github.com/hahnec/rf-ulm.
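Non-maximum suppression, one component named in this abstract, prunes near-duplicate detections by keeping only the highest-confidence candidate within a radius. A generic greedy sketch of the idea with invented candidates; the paper's learned pipeline is more elaborate:

```python
def non_max_suppression(candidates, min_dist):
    """Greedy NMS: keep the highest-confidence candidates, discarding any
    later candidate closer than min_dist to one already kept.
    candidates: list of (confidence, x, y)."""
    kept = []
    for conf, x, y in sorted(candidates, reverse=True):
        if all((x - kx) ** 2 + (y - ky) ** 2 >= min_dist ** 2
               for _, kx, ky in kept):
            kept.append((conf, x, y))
    return kept

cands = [(0.9, 10.0, 10.0), (0.8, 10.5, 10.2), (0.7, 30.0, 5.0)]
# The 0.8 candidate is suppressed by the nearby 0.9 one
print(non_max_suppression(cands, min_dist=2.0))
```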

12.
Transl Vis Sci Technol ; 13(4): 1, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38564203

ABSTRACT

Purpose: The purpose of this study was to develop a deep learning algorithm to detect retinal breaks and retinal detachments on ultra-widefield fundus (UWF) Optos images using artificial intelligence (AI). Methods: Optomap UWF images of the database were annotated into four groups by two retina specialists: (1) retinal breaks without detachment, (2) retinal breaks with retinal detachment, (3) retinal detachment without visible retinal breaks, and (4) a combination of groups 1 to 3. The fundus image dataset was split into a training set and an independent test set following an 80% to 20% ratio. Image preprocessing methods were applied. An EfficientNet classification model was trained on the training set and evaluated on the test set. Results: A total of 2489 UWF images were included in the dataset, resulting in a training set of 2008 UWF images and a test set of 481 images. The classification models achieved an area under the receiver operating characteristic curve (AUC) on the test set of 0.975 for lesion detection, an AUC of 0.972 for retinal detachment, and an AUC of 0.913 for retinal breaks. Conclusions: A deep learning system to detect retinal breaks and retinal detachment using UWF images is feasible and has good specificity. This is relevant for clinical routine, as there can be a high rate of missed breaks in clinics. Future clinical studies will be necessary to evaluate the cost-effectiveness of applying such an algorithm as an automated auxiliary tool in large practices or tertiary referral centers. Translational Relevance: This study demonstrates the relevance of applying AI to diagnosing peripheral retinal breaks in UWF fundus images in clinical routine.


Subjects
Deep Learning , Retinal Detachment , Retinal Perforations , Humans , Retinal Detachment/diagnosis , Artificial Intelligence , Photography
13.
Cell Calcium ; 121: 102893, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38701707

ABSTRACT

The release of Ca2+ ions from intracellular stores plays a crucial role in many cellular processes, acting as a secondary messenger in various cell types, including cardiomyocytes, smooth muscle cells, hepatocytes, and many others. Detecting and classifying the associated local Ca2+ release events is particularly important, as these events provide insight into the mechanisms, interplay, and interdependencies of the local Ca2+ release events underlying global intracellular Ca2+ signaling. However, time-consuming and labor-intensive procedures often complicate analysis, especially with low signal-to-noise ratio imaging data. Here, we present an innovative deep learning-based approach for automatically detecting and classifying local Ca2+ release events. This approach is exemplified with rapid full-frame confocal imaging data recorded in isolated cardiomyocytes. To demonstrate the robustness and accuracy of our method, we first use conventional evaluation methods, comparing the intersection between manual annotations and the segmentation of Ca2+ release events provided by the deep learning method, as well as the annotated and recognized instances of individual events. In addition, we compare the performance of the proposed model with the annotations of six experts in the field. Our model can recognize more than 75% of the annotated Ca2+ release events and correctly classify more than 75% of them. A key result was that there were no significant differences between the annotations produced by human experts and the results of the proposed deep learning model. We conclude that the proposed approach is a robust and time-saving alternative to conventional full-frame confocal imaging analysis of local intracellular Ca2+ events.


Subjects
Calcium Signaling , Calcium , Deep Learning , Microscopy, Confocal , Myocytes, Cardiac , Calcium/metabolism , Microscopy, Confocal/methods , Animals , Myocytes, Cardiac/metabolism , Image Processing, Computer-Assisted/methods
14.
Sci Data ; 11(1): 373, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38609405

ABSTRACT

In recent years, the landscape of computer-assisted interventions and post-operative surgical video analysis has been dramatically reshaped by deep-learning techniques, resulting in significant advancements in surgeons' skills, operating room management, and overall surgical outcomes. However, the progression of deep-learning-powered surgical technologies is profoundly reliant on large-scale datasets and annotations. In particular, surgical scene understanding and phase recognition stand as pivotal pillars within the realm of computer-assisted surgery and post-operative assessment of cataract surgery videos. In this context, we present the largest cataract surgery video dataset that addresses diverse requisites for constructing computerized surgical workflow analysis and detecting post-operative irregularities in cataract surgery. We validate the quality of annotations by benchmarking the performance of several state-of-the-art neural network architectures for phase recognition and surgical scene segmentation. In addition, we initiate research on domain adaptation for instrument segmentation in cataract surgery by evaluating cross-domain instrument segmentation performance in cataract surgery videos. The dataset and annotations are publicly available on Synapse.


Subjects
Cataract Extraction , Cataract , Deep Learning , Video Recording , Humans , Benchmarking , Neural Networks, Computer , Cataract Extraction/methods
15.
Int J Retina Vitreous ; 10(1): 42, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38822446

ABSTRACT

AIM: To adopt a novel artificial intelligence (AI) optical coherence tomography (OCT)-based program to identify the presence of biomarkers associated with central serous chorioretinopathy (CSC) and to determine whether these can differentiate between acute and chronic central serous chorioretinopathy (aCSC and cCSC). METHODS: Multicenter, observational study with a retrospective design enrolling treatment-naïve patients with aCSC and cCSC. The diagnosis of aCSC and cCSC was established with multimodal imaging, and for the current study subsequent follow-up visits were also considered. Baseline OCTs were analyzed by an AI-based platform (Discovery® OCT Fluid and Biomarker Detector, RetinAI AG, Switzerland). This software detects several different biomarkers in each single OCT scan, including subretinal fluid (SRF), intraretinal fluid (IRF), hyperreflective foci (HF), and flat irregular pigment epithelium detachment (FIPED). The presence of SRF was a necessary inclusion criterion for performing biomarker analysis, and OCT slabs without SRF were excluded from the analysis. RESULTS: Overall, 160 eyes of 144 patients with CSC were enrolled, of which 100 eyes (62.5%) were diagnosed with cCSC and 60 eyes (37.5%) with aCSC. In the OCT slabs showing presence of SRF, the presence of biomarkers was found to be clinically relevant (> 50%) for HF and FIPED in both aCSC and cCSC. HF had an average percentage of 81% (± 20) in the cCSC group and 81% (± 15) in the aCSC group (p = 0.4295), and FIPED had a mean percentage of 88% (± 18) in cCSC vs. 89% (± 15) in aCSC (p = 0.3197). CONCLUSION: We demonstrate that HF and FIPED are OCT biomarkers positively associated with CSC when present at baseline. While both HF and FIPED could aid in CSC diagnosis, they could not distinguish between aCSC and cCSC at the first visit. AI-assisted biomarker detection shows promise for reducing the need for invasive imaging, but further validation through longitudinal studies is needed.

16.
Med Image Anal ; 87: 102822, 2023 07.
Article in English | MEDLINE | ID: mdl-37182321

ABSTRACT

Recent advances in machine learning models have greatly increased the performance of automated methods in medical image analysis. However, the internal functioning of such models is largely hidden, which hinders their integration in clinical practice. Explainability and trust are viewed as important aspects of modern methods and prerequisites for their widespread use in clinical communities. As such, validation of machine learning models represents an important aspect, and yet most methods are only validated in a limited way. In this work, we focus on providing a richer and more appropriate validation approach for highly powerful Visual Question Answering (VQA) algorithms. To better understand the performance of these methods, which answer arbitrary questions related to images, this work focuses on an automatic visual Turing test (VTT). That is, we propose an automatic adaptive questioning method that aims to expose the reasoning behavior of a VQA algorithm. Specifically, we introduce a reinforcement learning (RL) agent that observes the history of previously asked questions and uses it to select the next question to pose. We demonstrate our approach in the context of evaluating algorithms that automatically answer questions related to diabetic macular edema (DME) grading. The experiments show that such an agent behaves similarly to a clinician, asking questions that are relevant to key clinical concepts.


Subjects
Diabetes Mellitus , Diabetic Retinopathy , Macular Edema , Humans , Diabetic Retinopathy/diagnostic imaging , Macular Edema/diagnostic imaging , Algorithms , Machine Learning
17.
Int J Comput Assist Radiol Surg ; 18(6): 1085-1091, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37133678

ABSTRACT

PURPOSE: A fundamental problem in designing safe machine learning systems is identifying when samples presented to a deployed model differ from those observed at training time. Detecting so-called out-of-distribution (OoD) samples is crucial in safety-critical applications such as robotically guided retinal microsurgery, where distances between the instrument and the retina are derived from sequences of 1D images that are acquired by an instrument-integrated optical coherence tomography (iiOCT) probe. METHODS: This work investigates the feasibility of using an OoD detector to identify when images from the iiOCT probe are inappropriate for subsequent machine learning-based distance estimation. We show how a simple OoD detector based on the Mahalanobis distance can successfully reject corrupted samples coming from real-world ex vivo porcine eyes. RESULTS: Our results demonstrate that the proposed approach can successfully detect OoD samples and help maintain the performance of the downstream task within reasonable levels. MahaAD outperformed a supervised approach trained on the same kind of corruptions and achieved the best performance in detecting OoD cases from a collection of iiOCT samples with real-world corruptions. CONCLUSION: The results indicate that detecting corrupted iiOCT data through OoD detection is feasible and does not need prior knowledge of possible corruptions. Consequently, MahaAD could aid in ensuring patient safety during robotically guided microsurgery by preventing deployed prediction models from estimating distances that put the patient at risk.
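The Mahalanobis-distance OoD detection described above can be sketched as: fit a Gaussian to in-distribution feature vectors, then reject samples whose distance from the mean exceeds a threshold. Feature extraction from iiOCT A-scans is not shown; the inputs below are toy vectors, and the threshold is an arbitrary assumption.

```python
# Minimal sketch of a Mahalanobis-distance OoD detector in the spirit of
# the MahaAD approach above: fit mean and covariance on in-distribution
# features, flag samples whose Mahalanobis distance exceeds a threshold.
import numpy as np

class MahalanobisOoD:
    def fit(self, feats):                       # feats: (n, d) in-distribution
        self.mu = feats.mean(axis=0)
        cov = np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])
        self.prec = np.linalg.inv(cov)          # precision matrix
        return self

    def distance(self, x):
        d = x - self.mu
        return float(np.sqrt(d @ self.prec @ d))

    def is_ood(self, x, threshold):
        return self.distance(x) > threshold

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 4))     # in-distribution features
det = MahalanobisOoD().fit(train)
print(det.is_ood(np.zeros(4), threshold=3.0))       # near the mean: in-dist
print(det.is_ood(np.full(4, 10.0), threshold=3.0))  # far away: flagged OoD
```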


Subjects
Microsurgery , Retina , Animals , Swine , Microsurgery/methods , Retina/diagnostic imaging , Retina/surgery , Machine Learning , Optical Coherence Tomography/methods
18.
Int J Comput Assist Radiol Surg ; 18(7): 1185-1192, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37184768

ABSTRACT

PURPOSE: Surgical scene understanding plays a critical role in the technology stack of tomorrow's intervention-assisting systems in endoscopic surgeries. For this, tracking the endoscope pose is a key component, but remains challenging due to illumination conditions, deforming tissues and the breathing motion of organs. METHOD: We propose a solution for stereo endoscopes that estimates depth and optical flow to minimize two geometric losses for camera pose estimation. Most importantly, we introduce two learned adaptive per-pixel weight mappings that balance contributions according to the input image content. To do so, we train a Deep Declarative Network to take advantage of the expressiveness of deep learning and the robustness of a novel geometric-based optimization approach. We validate our approach on the publicly available SCARED dataset and introduce a new in vivo dataset, StereoMIS, which includes a wider spectrum of typically observed surgical settings. RESULTS: Our method outperforms state-of-the-art methods on average and more importantly, in difficult scenarios where tissue deformations and breathing motion are visible. We observed that our proposed weight mappings attenuate the contribution of pixels on ambiguous regions of the images, such as deforming tissues. CONCLUSION: We demonstrate the effectiveness of our solution to robustly estimate the camera pose in challenging endoscopic surgical scenes. Our contributions can be used to improve related tasks like simultaneous localization and mapping (SLAM) or 3D reconstruction, therefore advancing surgical scene understanding in minimally invasive surgery.
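The per-pixel weighting idea above can be illustrated with a weighted geometric residual: ambiguous pixels (e.g. on deforming tissue) are down-weighted so they contribute less to the pose loss. This is a sketch only; in the paper the weight maps are predicted by a network inside a Deep Declarative Network, not hand-set as below.

```python
# Sketch of a per-pixel weighted geometric loss in the spirit of the
# method above: squared residuals are averaged with confidence weights so
# that ambiguous pixels (deforming tissue, breathing motion) count less.
import numpy as np

def weighted_geometric_loss(residuals, weights):
    """Weighted mean of squared per-pixel residuals (weights normalized)."""
    w = weights / weights.sum()
    return float((w * residuals ** 2).sum())

residuals = np.array([0.1, 0.1, 2.0])   # last pixel lies on deforming tissue
uniform = np.ones(3)                    # no adaptation
adaptive = np.array([1.0, 1.0, 0.1])    # down-weight the ambiguous pixel

# Down-weighting the outlier pixel reduces the loss used for pose fitting.
print(weighted_geometric_loss(residuals, uniform) >
      weighted_geometric_loss(residuals, adaptive))  # True
```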


Subjects
Algorithms , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Endoscopy/methods , Minimally Invasive Surgical Procedures/methods , Endoscopes
19.
Sci Rep ; 13(1): 19667, 2023 11 11.
Article in English | MEDLINE | ID: mdl-37952011

ABSTRACT

Recent developments in deep learning have shown success in accurately predicting the location of biological markers in Optical Coherence Tomography (OCT) volumes of patients with Age-Related Macular Degeneration (AMD) and Diabetic Retinopathy (DR). We propose a method that automatically assigns biological markers to the Early Treatment Diabetic Retinopathy Study (ETDRS) rings, requiring only B-scan-level presence annotations. We trained a neural network using 22,723 OCT B-scans of 460 eyes (433 patients) with AMD and DR, annotated with slice-level labels for Intraretinal Fluid (IRF) and Subretinal Fluid (SRF). The neural network outputs were mapped into the corresponding ETDRS rings. We incorporated the class annotations and domain knowledge into a loss function to constrain the output to biologically plausible solutions. The method was tested on a set of OCT volumes from 322 eyes (189 patients) with Diabetic Macular Edema, with slice-level SRF and IRF presence annotations for the ETDRS rings. Our method accurately predicted the presence of IRF and SRF in each ETDRS ring, outperforming previous baselines even in the most challenging scenarios. Our model was also successfully applied to en-face marker segmentation and showed consistency within C-scans, despite not incorporating volume information in the training process. We achieved a correlation coefficient of 0.946 for the prediction of the IRF area.
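Mapping en-face positions to ETDRS rings, as done when aggregating predictions above, can be sketched from the standard grid geometry: concentric circles of 1, 3 and 6 mm diameter centered on the fovea. The ring labels below are simplified (the standard grid further splits the inner and outer rings into quadrants).

```python
# Sketch of assigning an en-face point to an ETDRS ring. The standard
# ETDRS grid uses concentric circles of 1, 3 and 6 mm diameter centered
# on the fovea; quadrant subdivision is omitted for brevity.
import math

def etdrs_ring(x_mm, y_mm):
    """Return the ETDRS ring for a point given in mm from the fovea."""
    r = math.hypot(x_mm, y_mm)
    if r <= 0.5:
        return "central"   # 1 mm diameter central subfield
    if r <= 1.5:
        return "inner"     # 3 mm diameter inner ring
    if r <= 3.0:
        return "outer"     # 6 mm diameter outer ring
    return "outside"

print(etdrs_ring(0.2, 0.3))   # central
print(etdrs_ring(1.0, 0.5))   # inner
print(etdrs_ring(2.5, 1.0))   # outer
```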


Subjects
Diabetic Retinopathy , Macular Degeneration , Macular Edema , Humans , Diabetic Retinopathy/diagnostic imaging , Macular Edema/diagnostic imaging , Optical Coherence Tomography/methods , Macular Degeneration/diagnostic imaging , Biomarkers
20.
Eur J Radiol ; 167: 111047, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37690351

ABSTRACT

PURPOSE: To evaluate the effectiveness of automated liver segmental volume quantification and calculation of the liver segmental volume ratio (LSVR) on a non-contrast T1-vibe Dixon liver MRI sequence using a deep learning segmentation pipeline. METHOD: A dataset of 200 liver MRI with a non-contrast 3 mm T1-vibe Dixon sequence was manually labeled slice-by-slice by an expert for Couinaud liver segments, while portal and hepatic veins were labeled separately. A convolutional neural network was trained using 170 liver MRI for training and 30 for evaluation. Liver segmental volumes without liver vessels were retrieved, and LSVR was calculated as the liver segmental volumes I-III divided by the liver segmental volumes IV-VIII. LSVR was compared with the expert manual LSVR calculation and with the LSVR calculated on CT scans in 30 patients with CT and MRI within 6 months. RESULTS: The convolutional neural network classified the Couinaud segments I-VIII with an average Dice score of 0.770 ± 0.03, ranging between 0.726 ± 0.13 (segment IVb) and 0.810 ± 0.09 (segment V). The calculated mean LSVR with liver MRI unseen by the model was 0.32 ± 0.14, as compared with a manually quantified LSVR of 0.33 ± 0.15, resulting in a mean absolute error (MAE) of 0.02. A comparable LSVR of 0.35 ± 0.14, with a MAE of 0.04, resulted from the LSVR retrieved from the CT scans. The automated LSVR showed significant correlation with the manual MRI LSVR (Spearman r = 0.97, p < 0.001) and CT LSVR (Spearman r = 0.95, p < 0.001). CONCLUSIONS: A convolutional neural network allowed for accurate automated liver segmental volume quantification and calculation of LSVR based on a non-contrast T1-vibe Dixon sequence.
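The LSVR defined above is a simple ratio of Couinaud segment volumes: segments I-III divided by segments IV-VIII. A minimal sketch, with hypothetical segment volumes (mL, vessels excluded):

```python
# Sketch of the LSVR computation described above: volume of Couinaud
# segments I-III divided by volume of segments IV-VIII. Segment volumes
# are hypothetical illustrations, not study data.
def lsvr(volumes):
    """volumes: dict mapping Couinaud segment label to its volume."""
    left = sum(volumes[s] for s in ("I", "II", "III"))
    right = sum(volumes[s] for s in ("IVa", "IVb", "V", "VI", "VII", "VIII"))
    return left / right

segs = {"I": 30, "II": 90, "III": 120, "IVa": 80, "IVb": 70,
        "V": 160, "VI": 150, "VII": 180, "VIII": 110}
print(round(lsvr(segs), 2))  # a value in the range of the means reported above
```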


Subjects
Deep Learning , Humans , Liver/diagnostic imaging , Radiography , Radionuclide Imaging , Magnetic Resonance Imaging