Results 1 - 7 of 7
1.
Eye (Lond) ; 38(3): 537-544, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37670143

ABSTRACT

PURPOSE: To validate a deep learning algorithm for automated intraretinal fluid (IRF), subretinal fluid (SRF) and neovascular pigment epithelium detachment (nPED) segmentations in neovascular age-related macular degeneration (nAMD). METHODS: In this IRB-approved study, optical coherence tomography (OCT) data from 50 patients (50 eyes) with exudative nAMD were retrospectively analysed. Two models, A1 and A2, were created based on gradings from two masked readers, R1 and R2. Area under the curve (AUC) values gauged detection performance, and quantification between readers and models was evaluated using Dice and correlation (R2) coefficients. RESULTS: The deep learning-based algorithms had high accuracies for all fluid types between all models and readers: per B-scan IRF AUCs were 0.953, 0.932, 0.990, 0.942 for comparisons A1-R1, A1-R2, A2-R1 and A2-R2, respectively; SRF AUCs were 0.984, 0.974, 0.987, 0.979; and nPED AUCs were 0.963, 0.969, 0.961 and 0.966. Similarly, the R2 coefficients for IRF were 0.973, 0.974, 0.889 and 0.973; SRF were 0.928, 0.964, 0.965 and 0.998; and nPED were 0.908, 0.952, 0.839 and 0.905. The Dice coefficients for IRF averaged 0.702, 0.667, 0.649 and 0.631; for SRF were 0.699, 0.651, 0.692 and 0.701; and for nPED were 0.636, 0.703, 0.719 and 0.775. In an inter-observer comparison between manual readers R1 and R2, the R2 coefficient was 0.968 for IRF, 0.960 for SRF, and 0.906 for nPED, with Dice coefficients of 0.692, 0.660 and 0.784 for the same features. CONCLUSIONS: Our deep learning-based method applied on nAMD can segment critical OCT features with performance akin to manual grading.
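The Dice coefficient reported above measures spatial overlap between two segmentations. A minimal sketch of the metric only; the flat 0/1 mask format and example values are illustrative assumptions, not the study's data.

```python
def dice(mask_a, mask_b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two equal-length binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks agree perfectly.
    return 1.0 if total == 0 else 2.0 * inter / total

# Example: masks overlapping on 2 of the 3 pixels each labels as fluid.
a = [1, 1, 1, 0, 0]
b = [0, 1, 1, 1, 0]
print(dice(a, b))  # 2*2 / (3+3) ≈ 0.667
```

A Dice of 1.0 means identical segmentations; the per-feature averages near 0.6-0.8 above indicate substantial but imperfect pixel-level overlap.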


Subjects
Deep Learning, Macular Degeneration, Retinal Detachment, Wet Macular Degeneration, Humans, Optical Coherence Tomography/methods, Retrospective Studies, Subretinal Fluid, Macular Degeneration/drug therapy, Wet Macular Degeneration/diagnostic imaging, Wet Macular Degeneration/drug therapy, Angiogenesis Inhibitors/therapeutic use, Ranibizumab/therapeutic use, Intravitreal Injections
2.
Retina ; 43(3): 433-443, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36705991

ABSTRACT

PURPOSE: To evaluate a prototype home optical coherence tomography device and automated analysis software for detection and quantification of retinal fluid relative to manual human grading in a cohort of patients with neovascular age-related macular degeneration. METHODS: Patients undergoing anti-vascular endothelial growth factor therapy were enrolled in this prospective observational study. In 136 optical coherence tomography scans from 70 patients using the prototype home optical coherence tomography device, fluid segmentation was performed using automated analysis software and compared with manual gradings across all retinal fluid types using receiver-operating characteristic curves. The Dice similarity coefficient was used to assess the accuracy of segmentations, and correlation of fluid areas quantified end point agreement. RESULTS: Fluid detection per B-scan had area under the receiver-operating characteristic curves of 0.95, 0.97, and 0.98 for intraretinal fluid, subretinal fluid, and subretinal pigment epithelium fluid, respectively. On a per volume basis, the values for intraretinal fluid, subretinal fluid, and subretinal pigment epithelium fluid were 0.997, 0.998, and 0.998, respectively. The average Dice similarity coefficient values across all B-scans were 0.64, 0.73, and 0.74, and the coefficients of determination were 0.81, 0.93, and 0.97 for intraretinal fluid, subretinal fluid, and subretinal pigment epithelium fluid, respectively. CONCLUSION: Home optical coherence tomography device images assessed using the automated analysis software showed excellent agreement with manual human grading.
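The per-B-scan detection figures above are areas under the ROC curve: the probability that a fluid-positive B-scan receives a higher algorithm score than a fluid-negative one. A generic sketch of the metric via the Mann-Whitney formulation; the labels and scores below are made up, not values from the study.

```python
def auc(labels, scores):
    """ROC AUC as the fraction of (positive, negative) pairs ranked
    correctly; ties count as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Three fluid-positive and three fluid-negative B-scans with toy scores.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(auc(labels, scores))  # 8 of 9 pairs ranked correctly ≈ 0.889
```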


Subjects
Macular Degeneration, Wet Macular Degeneration, Humans, Optical Coherence Tomography/methods, Retina, Subretinal Fluid, Software, Macular Degeneration/diagnosis, Angiogenesis Inhibitors
3.
Ophthalmic Surg Lasers Imaging Retina ; 53(4): 208-214, 2022 04.
Article in English | MEDLINE | ID: mdl-35417293

ABSTRACT

BACKGROUND AND OBJECTIVE: To determine whether an automated artificial intelligence (AI) model could assess macular hole (MH) volume on swept-source optical coherence tomography (OCT) images. PATIENTS AND METHODS: This was a proof-of-concept consecutive case series. Patients with an idiopathic full-thickness MH undergoing pars plana vitrectomy surgery with 1 year of follow-up were considered for inclusion. MHs were manually graded by a vitreoretinal surgeon from preoperative OCT images to delineate MH volume. This information was used to train a fully three-dimensional convolutional neural network for automatic segmentation. The main outcome was the correlation of manual MH volume to automated volume segmentation. RESULTS: The correlation between manual and automated MH volume was R2 = 0.94 (n = 24). Automated MH volume demonstrated a higher correlation to change in visual acuity from preoperative to the postoperative 1-year time point compared with the minimum linear diameter (volume: R2 = 0.53; minimum linear diameter: R2 = 0.39). CONCLUSION: MH automated volume segmentation on OCT imaging demonstrated high correlation to manual MH volume measurements. [Ophthalmic Surg Lasers Imaging Retina. 2022;53(4):208-214.].
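Once a network produces a voxelwise segmentation of the macular hole, volume follows from counting labelled voxels and scaling by the voxel size. A generic illustration; the voxel spacing values are assumptions for the example, not the scanner's actual resolution.

```python
def segmented_volume_mm3(mask, voxel_mm=(0.012, 0.012, 0.003)):
    """mask: nested lists [z][y][x] of 0/1; voxel_mm: (dx, dy, dz) spacing.
    Returns the segmented volume in cubic millimetres."""
    dx, dy, dz = voxel_mm
    n_voxels = sum(v for plane in mask for row in plane for v in row)
    return n_voxels * dx * dy * dz

# Toy single-slice mask with 3 labelled voxels.
mask = [[[1, 1],
         [1, 0]]]
print(segmented_volume_mm3(mask))  # 3 voxels * voxel volume
```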


Subjects
Deep Learning, Retinal Perforations, Artificial Intelligence, Humans, Retinal Perforations/diagnostic imaging, Retinal Perforations/surgery, Retrospective Studies, Optical Coherence Tomography/methods, Vitrectomy/methods
4.
Article in English | MEDLINE | ID: mdl-17354804

ABSTRACT

Computer-aided detection (CAD) has become increasingly common in recent years as a tool for catching breast cancer in its early, more treatable stages. More and more breast centers are using CAD as studies continue to demonstrate its effectiveness. As the technology behind CAD improves, so do its results and its impact on society. In trying to improve the sensitivity and specificity of CAD algorithms, a good deal of work has been done on feature extraction: the generation of mathematical representations of mammographic features that can help distinguish true cancerous lesions from false positives. One feature that physicians rely on in making their decisions, but that is not currently seen in the literature, is location within the breast. This feature is difficult to calculate because it requires a good deal of prior knowledge as well as some way of accounting for the tremendous variability present in breast shapes. In this paper, we present a method for the generation and implementation of a probabilistic breast cancer atlas. We then validate this method on data from the Digital Database for Screening Mammography (DDSM).
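A probabilistic atlas of the kind described can be built by mapping each known lesion location into a normalized coordinate frame and accumulating counts on a grid; dividing by the number of cases gives an empirical probability per cell. A conceptual sketch only; the 4x4 grid, the normalized (x, y) frame, and the coordinates are illustrative assumptions, not the paper's construction.

```python
def build_atlas(lesion_xy, grid=4):
    """lesion_xy: (x, y) pairs in [0, 1)^2 normalized breast coordinates.
    Returns a grid of empirical lesion probabilities."""
    counts = [[0] * grid for _ in range(grid)]
    for x, y in lesion_xy:
        counts[int(y * grid)][int(x * grid)] += 1
    n = len(lesion_xy)
    return [[c / n for c in row] for row in counts]

# Three toy lesions, two of them clustered near the origin.
atlas = build_atlas([(0.1, 0.1), (0.15, 0.12), (0.8, 0.9)])
print(atlas[0][0])  # 2/3 of lesions fall in the lower-left cell
```

Such a grid can then serve as a location prior that reweights CAD detections by how likely a lesion is at that position.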


Subjects
Algorithms, Artistic Anatomy/methods, Breast Neoplasms/diagnostic imaging, Three-Dimensional Imaging/methods, Information Storage and Retrieval/methods, Mammography/methods, Computer-Assisted Radiographic Image Interpretation/methods, Artificial Intelligence, Computer Simulation, Statistical Data Interpretation, Factual Databases, Female, Humans, Image Enhancement/methods, Medical Illustration, Anatomic Models, Biological Models, Statistical Models, Automated Pattern Recognition/methods, Reproducibility of Results, Sensitivity and Specificity, Computer-Assisted Signal Processing
5.
IEEE Trans Med Imaging ; 24(11): 1441-54, 2005 Nov.
Article in English | MEDLINE | ID: mdl-16279081

ABSTRACT

Generation of digitally reconstructed radiographs (DRRs) is computationally expensive and is typically the rate-limiting step in the execution time of intensity-based two-dimensional to three-dimensional (2D-3D) registration algorithms. We address this computational issue by extending the technique of light field rendering from the computer graphics community. The extension of light fields, which we call attenuation fields (AFs), allows most of the DRR computation to be performed in a preprocessing step; after this precomputation step, DRRs can be generated substantially faster than with conventional ray casting. We derive expressions for the physical sizes of the two planes of an AF necessary to generate DRRs for a given X-ray camera geometry and all possible object motion within a specified range. Because an AF is a ray-based data structure that eliminates the redundancy of replicated rays, it is substantially more memory efficient than a huge table of precomputed DRRs. Nonetheless, an AF can require substantial memory, which we address by compressing it using vector quantization. We compare DRRs generated using AFs (AF-DRRs) to those generated using ray casting (RC-DRRs) for a typical C-arm geometry and computed tomography images of several anatomic regions. They are quantitatively very similar: the median peak signal-to-noise ratio of AF-DRRs versus RC-DRRs is greater than 43 dB in all cases. We perform intensity-based 2D-3D registration using AF-DRRs and RC-DRRs and evaluate registration accuracy using gold-standard clinical spine image data from four patients. The registration accuracy and robustness of the two methods is virtually identical, whereas the execution speed using AF-DRRs is an order of magnitude faster.
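The core idea of the attenuation field is that each X-ray is parameterized by its intersections with two planes, and the line integral of attenuation along every such ray is computed once in a preprocessing step; DRR generation for any pose in the covered range then becomes table lookups instead of per-ray integration. A toy 1-D sketch of that precompute-then-lookup pattern with a made-up attenuation function; it is not the paper's implementation, which also handles camera geometry, plane sizing, and vector-quantized compression.

```python
def line_integral(u, s, steps=100):
    """Numerically integrate a toy attenuation function along the ray that
    hits plane 0 at coordinate u and plane 1 at coordinate s."""
    return sum(abs(u + (s - u) * k / steps) for k in range(steps)) / steps

# Precompute: line integrals for a discrete grid of (u, s) ray parameters.
grid = [i / 10 for i in range(11)]
af = {(u, s): line_integral(u, s) for u in grid for s in grid}

# DRR generation now reads the table instead of integrating per ray.
print(af[(0.0, 1.0)])  # identical to calling line_integral(0.0, 1.0)
```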


Subjects
Algorithms, Three-Dimensional Imaging/methods, Radiographic Image Enhancement/methods, Computer-Assisted Radiographic Image Interpretation/methods, Spine/diagnostic imaging, Subtraction Technique, Computer-Assisted Surgery/methods, Computer Systems, Humans, Reproducibility of Results, Radiation Scattering, Sensitivity and Specificity, Computer-Assisted Signal Processing, Spine/surgery
6.
IEEE Trans Med Imaging ; 24(11): 1455-68, 2005 Nov.
Article in English | MEDLINE | ID: mdl-16279082

ABSTRACT

Accurate and fast localization of a predefined target region inside the patient is an important component of many image-guided therapy procedures. This problem is commonly solved by registration of intraoperative 2-D projection images to 3-D preoperative images. If the patient is not fixed during the intervention, the 2-D image acquisition is repeated several times during the procedure, and the registration problem can be cast instead as a 3-D tracking problem. To solve the 3-D problem, we propose in this paper to apply 2-D region tracking to first recover the components of the transformation that are in-plane to the projections. The 2-D motion estimates of all projections are backprojected into 3-D space, where they are then combined into a consistent estimate of the 3-D motion. We compare this method to intensity-based 2-D to 3-D registration and a combination of 2-D motion backprojection followed by a 2-D to 3-D registration stage. Using clinical data with a fiducial marker-based gold-standard transformation, we show that our method is capable of accurately tracking vertebral targets in 3-D from 2-D motion measured in X-ray projection images. Using a standard tracking algorithm (hyperplane tracking), tracking is achieved at video frame rates but fails relatively often (32% of all frames tracked with target registration error (TRE) better than 1.2 mm, 82% of all frames tracked with TRE better than 2.4 mm). With intensity-based 2-D to 2-D image registration using normalized mutual information (NMI) and pattern intensity (PI), accuracy and robustness are substantially improved. NMI tracked 82% of all frames in our data with TRE better than 1.2 mm and 96% of all frames with TRE better than 2.4 mm. This comes at the cost of a reduced frame rate: 1.7 s average processing time per frame per projection device. Results using PI were slightly more accurate, but required on average 5.4 s per frame. These results are still substantially faster than 2-D to 3-D registration. We conclude that motion backprojection from 2-D motion tracking is an accurate and efficient method for tracking 3-D target motion, but tracking 2-D motion accurately and robustly remains a challenge.
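The combination step described above, fusing per-projection in-plane 2-D motion into one 3-D motion estimate, can be sketched in a heavily simplified form. The sketch assumes two idealized orthogonal views (an AP view observing (x, y) displacement and a lateral view observing (z, y)); the paper's backprojection handles general projection geometry, and these names and values are illustrative only.

```python
def fuse_motion(ap_dxdy, lat_dzdy):
    """Each view measures only motion in its image plane; the component
    both views share (y) is averaged, and each view supplies the component
    the other cannot see. Returns the fused (dx, dy, dz) in mm."""
    ap_dx, ap_dy = ap_dxdy
    lat_dz, lat_dy = lat_dzdy
    return (ap_dx, (ap_dy + lat_dy) / 2.0, lat_dz)

# AP view saw the target move (1.0, 2.1) mm; lateral view saw (0.5, 1.9) mm.
print(fuse_motion((1.0, 2.1), (0.5, 1.9)))  # fused estimate (1.0, 2.0, 0.5)
```

With more than two views or non-orthogonal geometry, the same idea becomes a least-squares problem over the per-view projection equations.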


Subjects
Algorithms, Three-Dimensional Imaging/methods, Movement, Neuronavigation/methods, Computer-Assisted Radiographic Image Interpretation/methods, Radiosurgery/methods, Subtraction Technique, Artifacts, Artificial Intelligence, Computer Systems, Humans, Radiographic Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity
7.
Acad Radiol ; 12(1): 37-50, 2005 Jan.
Article in English | MEDLINE | ID: mdl-15691724

ABSTRACT

RATIONALE AND OBJECTIVES: The two-dimensional (2D)-three dimensional (3D) registration of a computed tomography image to one or more x-ray projection images has a number of image-guided therapy applications. In general, fiducial marker-based methods are fast, accurate, and robust, but marker implantation is not always possible, often is considered too invasive to be clinically acceptable, and entails risk. There also is the unresolved issue of whether it is acceptable to leave markers permanently implanted. Intensity-based registration methods do not require the use of markers and can be automated because such geometric features as points and surfaces do not need to be segmented from the images. However, for spine images, intensity-based methods are susceptible to local optima in the cost function and thus need initial transformations that are close to the correct transformation. MATERIALS AND METHODS: In this report, we propose a hybrid similarity measure for 2D-3D registration that is a weighted combination of an intensity-based similarity measure (mutual information) and a point-based measure using one fiducial marker. We evaluate its registration accuracy and robustness by using gold-standard clinical spine image data from four patients. RESULTS: Mean registration errors for successful registrations for the four patients were 1.3 and 1.1 mm for the intensity-based and hybrid similarity measures, respectively. Whereas the percentage of successful intensity-based registrations (registration error < 2.5 mm) decreased rapidly as the initial transformation got further from the correct transformation, the incorporation of a single marker produced successful registrations more than 99% of the time independent of the initial transformation. CONCLUSION: The use of one fiducial marker reduces 2D-3D spine image registration error slightly and improves robustness substantially. The findings are potentially relevant for image-guided therapy. If one marker is sufficient to obtain clinically acceptable registration accuracy and robustness, as the preliminary results using the proposed hybrid similarity measure suggest, the marker can be placed on a spinous process, which could be accomplished without penetrating muscle or using fluoroscopic guidance, and such a marker could be removed relatively easily.


Subjects
Computer-Assisted Image Processing/methods, Three-Dimensional Imaging/methods, Computer-Assisted Surgery/methods, X-Ray Computed Tomography/methods, Algorithms, Calibration, Cervical Vertebrae/diagnostic imaging, Equipment Design, Humans, Computer-Assisted Image Processing/instrumentation, Radiosurgery/instrumentation, Radiosurgery/methods, Spinal Diseases/surgery, Spine/diagnostic imaging, Computer-Assisted Surgery/instrumentation, Thoracic Vertebrae/diagnostic imaging