Results 1 - 20 of 22
1.
Comput Biol Med ; 170: 108047, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38295476

ABSTRACT

Retinal vessel segmentation plays a crucial role in the diagnosis and treatment of ocular pathologies. Current methods have limitations in feature fusion and face challenges in simultaneously capturing global and local features from fundus images. To address these issues, this study introduces a hybrid network named CoVi-Net, which combines convolutional neural networks and a vision transformer. In our proposed model, we have integrated a novel module for local and global feature aggregation (LGFA). This module facilitates remote information interaction while retaining the capability to effectively gather local information. In addition, we introduce a bidirectional weighted feature fusion module (BWF). Recognizing the variations in semantic information across layers, we allocate adjustable weights to different feature layers for adaptive feature fusion. BWF employs a bidirectional fusion strategy to mitigate the decay of effective information. We also incorporate horizontal and vertical connections to enhance feature fusion and utilization across various scales, thereby improving the segmentation of multiscale vessel images. Furthermore, we introduce an adaptive lateral feature fusion (ALFF) module that refines the final vessel segmentation map by enriching it with more semantic information from the network. In the evaluation of our model, we employed three well-established retinal image databases (DRIVE, CHASEDB1, and STARE). Our experimental results demonstrate that CoVi-Net outperforms other state-of-the-art techniques, achieving a global accuracy of 0.9698, 0.9756, and 0.9761 and an area under the curve of 0.9880, 0.9903, and 0.9915 on DRIVE, CHASEDB1, and STARE, respectively. We conducted ablation studies to assess the individual effectiveness of the three modules. In addition, we examined the adaptability of our CoVi-Net model for segmenting lesion images.
Our experiments indicate that our proposed model holds promise in aiding the diagnosis of retinal vascular disorders.
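The BWF idea of assigning adjustable, normalized weights to feature layers can be sketched as follows. This is a minimal NumPy illustration of weighted feature fusion, not the paper's implementation; the toy feature maps and weight values are invented for the example.

```python
import numpy as np

def weighted_fusion(features, weights, eps=1e-4):
    """Fuse same-shape feature maps with non-negative adjustable weights,
    normalized to sum to one (fast normalized fusion)."""
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)
    w = w / (w.sum() + eps)
    return sum(wi * f for wi, f in zip(w, features))

# Two toy "feature maps"; the second layer is given three times the weight.
low = np.ones((4, 4))
high = np.full((4, 4), 3.0)
fused = weighted_fusion([low, high], [1.0, 3.0])
```

In a trained network the weights would be learned parameters; normalizing them keeps the fused map on the same scale as its inputs.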


Subjects
Neural Networks, Computer; Retinal Vessels; Retinal Vessels/diagnostic imaging; Databases, Factual; Fundus Oculi; Semantics; Image Processing, Computer-Assisted
2.
Biomed Opt Express ; 14(9): 4739-4758, 2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37791275

ABSTRACT

Precise segmentation of retinal vessels plays an important role in computer-assisted diagnosis. Deep learning models have been applied to retinal vessel segmentation, but the efficacy is limited by the significant scale variation of vascular structures and the intricate background of retinal images. This paper proposes a cross-channel spatial attention U-Net (CCS-UNet) for accurate retinal vessel segmentation. In comparison to other models based on U-Net, our model employs a ResNeSt block for the encoder-decoder architecture. The block has a multi-branch structure that enables the model to extract more diverse vascular features. It facilitates weight distribution across channels through the incorporation of soft attention, which effectively aggregates contextual information in vascular images. Furthermore, we propose an attention mechanism within the skip connection. This mechanism serves to enhance feature integration across various layers, thereby mitigating the degradation of effective information. It helps acquire cross-channel information and enhance the localization of regions of interest, ultimately leading to improved recognition of vascular structures. In addition, the feature fusion module (FFM) is used to provide semantic information for a more refined vascular segmentation map. We evaluated CCS-UNet on five benchmark retinal image datasets, DRIVE, CHASEDB1, STARE, IOSTAR and HRF. Our proposed method exhibits superior segmentation efficacy compared to other state-of-the-art techniques, with a global accuracy of 0.9617/0.9806/0.9766/0.9786/0.9834 and AUC of 0.9863/0.9894/0.9938/0.9902/0.9855 on DRIVE, CHASEDB1, STARE, IOSTAR and HRF, respectively. Ablation studies are also performed to evaluate the relative contributions of different architectural components. Our proposed model has potential as a diagnostic aid for retinal diseases.
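A minimal sketch of the kind of channel attention a skip connection can apply (squeeze-and-excitation-style gating). The paper's actual mechanism may differ; the weight matrices here are random stand-ins for learned parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_skip(enc, w1, w2):
    """Gate each encoder channel before the skip connection:
    global-average-pool -> two small dense layers -> per-channel
    sigmoid gate, then rescale the feature map."""
    squeeze = enc.mean(axis=(1, 2))                      # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # (C,) in (0, 1)
    return enc * gate[:, None, None]

rng = np.random.default_rng(0)
enc = rng.normal(size=(8, 16, 16))       # (channels, H, W)
w1 = rng.normal(scale=0.1, size=(4, 8))  # bottleneck: 8 channels -> 4
w2 = rng.normal(scale=0.1, size=(8, 4))  # back to 8 channels
out = attention_skip(enc, w1, w2)
```

Because each gate lies in (0, 1), the mechanism can only attenuate channels, letting the decoder emphasize the most informative ones.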

3.
Eye (Lond) ; 37(6): 1080-1087, 2023 04.
Article in English | MEDLINE | ID: mdl-35437003

ABSTRACT

OBJECTIVES: To develop and validate an end-to-end region-based deep convolutional neural network (R-DCNN) to jointly segment the optic disc (OD) and optic cup (OC) in retinal fundus images for precise cup-to-disc ratio (CDR) measurement and glaucoma screening. METHODS: In total, 2440 retinal fundus images were retrospectively obtained from 2033 participants. An R-DCNN was presented for joint OD and OC segmentation, where the OD and OC segmentation problems were formulated into object detection problems. We compared R-DCNN's segmentation performance on our in-house dataset with that of four ophthalmologists while performing quantitative, qualitative and generalization analyses on both the publicly available DRISHTI-GS and RIM-ONE v3 datasets. The Dice similarity coefficient (DC), Jaccard coefficient (JC), overlapping error (E), sensitivity (SE), specificity (SP) and area under the curve (AUC) were measured. RESULTS: On our in-house dataset, the proposed model achieved a 98.51% DC and a 97.07% JC for OD segmentation, and a 97.63% DC and a 95.39% JC for OC segmentation, achieving a performance level comparable to that of the ophthalmologists. On the DRISHTI-GS dataset, our approach achieved a 97.23% DC and a 94.17% JC for OD segmentation, while it achieved a 94.56% DC and an 89.92% JC for OC segmentation. Additionally, on the RIM-ONE v3 dataset, our model generated DC and JC values of 96.89% and 91.32% on the OD segmentation task, respectively, whereas the DC and JC values acquired for OC segmentation were 88.94% and 78.21%, respectively. CONCLUSION: The proposed approach achieved very encouraging performance on the OD and OC segmentation tasks, as well as in glaucoma screening. It has the potential to serve as a useful tool for computer-assisted glaucoma screening.
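The reported metrics and the downstream CDR measurement are straightforward to compute from binary segmentation masks. A small NumPy sketch with toy masks, using the vertical CDR as one common convention (the paper may use a different one):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def vertical_cdr(od_mask, oc_mask):
    """Vertical cup-to-disc ratio: vertical extent of the cup mask
    divided by the vertical extent of the disc mask."""
    def vertical_extent(m):
        rows = np.where(m.any(axis=1))[0]
        return rows.max() - rows.min() + 1
    return vertical_extent(oc_mask) / vertical_extent(od_mask)

# Toy masks: a 10-row "disc" containing a 4-row "cup".
od = np.zeros((20, 20), dtype=bool); od[2:12, 4:14] = True
oc = np.zeros((20, 20), dtype=bool); oc[5:9, 6:12] = True
cdr = vertical_cdr(od, oc)
```

A vertical CDR well above ~0.6 is the usual red flag in glaucoma screening, which is why segmentation accuracy on both structures matters.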


Subjects
Deep Learning; Glaucoma; Optic Disk; Humans; Optic Disk/diagnostic imaging; Glaucoma/diagnosis; Retrospective Studies; Fundus Oculi
4.
Entropy (Basel) ; 24(12)2022 Nov 30.
Article in English | MEDLINE | ID: mdl-36554161

ABSTRACT

Accurate segmentation of lung nodules from pulmonary computed tomography (CT) slices plays a vital role in the analysis and diagnosis of lung cancer. Convolutional Neural Networks (CNNs) have achieved state-of-the-art performance in the automatic segmentation of lung nodules. However, they are still challenged by the large diversity of segmentation targets and the small inter-class variance between the nodule and its surrounding tissues. To tackle this issue, we propose a feature-complementary network modeled on the clinical diagnostic process, which makes full use of the complementarity and mutual facilitation among lung nodule location information, the global coarse area, and edge information. Specifically, we first consider the importance of global features of nodules in segmentation and propose a cross-scale weighted high-level feature decoder module. Then, we develop a low-level feature decoder module for edge feature refinement. Finally, we construct a complementary module that lets these sources of information complement and reinforce each other. Furthermore, we weight pixels located at the nodule edge in the loss function and add an edge supervision to the deep supervision, both of which emphasize the importance of edges in segmentation. The experimental results demonstrate that our model achieves robust pulmonary nodule segmentation and more accurate edge segmentation.
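The edge-weighting idea in the loss can be sketched as follows. This is a simplified stand-in: a 4-neighbour edge band rather than the paper's exact scheme, with a toy mask and a hypothetical edge weight of 3.

```python
import numpy as np

def edge_weight_map(mask, w_edge=3.0):
    """Per-pixel weight map that up-weights pixels where the mask changes
    between 4-neighbours (a cheap stand-in for a morphological-gradient
    edge band)."""
    m = mask.astype(bool)
    edge = np.zeros_like(m)
    edge[1:, :] |= m[:-1, :] ^ m[1:, :]
    edge[:, 1:] |= m[:, :-1] ^ m[:, 1:]
    w = np.ones(mask.shape)
    w[edge] = w_edge
    return w

def weighted_bce(pred, target, weights, eps=1e-7):
    """Binary cross-entropy with per-pixel weights."""
    p = np.clip(pred, eps, 1 - eps)
    loss = -(target * np.log(p) + (1 - target) * np.log(1 - p))
    return float((weights * loss).sum() / weights.sum())

mask = np.zeros((6, 6)); mask[2:4, 2:4] = 1.0   # toy 2x2 "nodule"
w = edge_weight_map(mask)
good = weighted_bce(np.where(mask > 0, 0.9, 0.1), mask, w)
bad = weighted_bce(np.where(mask > 0, 0.4, 0.6), mask, w)
```

Because boundary pixels carry a larger weight, errors at the nodule edge cost more than errors in the interior, which is the effect the abstract describes.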

5.
Eye (Lond) ; 36(7): 1433-1441, 2022 07.
Article in English | MEDLINE | ID: mdl-34211137

ABSTRACT

OBJECTIVES: To present and validate a deep ensemble algorithm to detect diabetic retinopathy (DR) and diabetic macular oedema (DMO) using retinal fundus images. METHODS: A total of 8739 retinal fundus images were collected from a retrospective cohort of 3285 patients. For detecting DR and DMO, a multiple improved Inception-v4 ensembling approach was developed. We measured the algorithm's performance and made a comparison with that of human experts on our primary dataset, while its generalization was assessed on the publicly available Messidor-2 dataset. We also systematically investigated the impact of the size and number of input images used in training on the model's performance. Further, the time budget of training/inference versus model performance was analyzed. RESULTS: On our primary test dataset, the model achieved a 0.992 (95% CI, 0.989-0.995) AUC corresponding to 0.925 (95% CI, 0.916-0.936) sensitivity and 0.961 (95% CI, 0.950-0.972) specificity for referable DR, while the sensitivity and specificity for ophthalmologists ranged from 0.845 to 0.936, and from 0.912 to 0.971, respectively. For referable DMO, our model generated an AUC of 0.994 (95% CI, 0.992-0.996) with a 0.930 (95% CI, 0.919-0.941) sensitivity and 0.971 (95% CI, 0.965-0.978) specificity, whereas ophthalmologists obtained sensitivities ranging between 0.852 and 0.946, and specificities ranging between 0.926 and 0.985. CONCLUSION: This study showed that the deep ensemble model exhibited excellent performance in detecting DR and DMO, and had good robustness and generalization, which could potentially help support and expand DR/DMO screening programs.
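Ensembling several Inception-v4 instances typically reduces to averaging their class probabilities (soft voting). A minimal sketch with made-up probabilities from three hypothetical models scoring one image as (no-referral, referral):

```python
import numpy as np

def soft_vote(prob_list):
    """Ensemble by soft voting: average the class-probability vectors of
    several models and take the argmax of the mean."""
    probs = np.mean(np.asarray(prob_list, dtype=float), axis=0)
    return probs, int(np.argmax(probs))

probs, cls = soft_vote([[0.60, 0.40], [0.20, 0.80], [0.30, 0.70]])
```

Averaging smooths out individual models' miscalibrations, which is one reason ensembles tend to be more robust than any single member.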


Subjects
Deep Learning; Diabetes Mellitus; Diabetic Retinopathy; Macular Edema; Diabetic Retinopathy/diagnosis; Fundus Oculi; Humans; Macular Edema/diagnosis; Retrospective Studies
7.
Graefes Arch Clin Exp Ophthalmol ; 258(4): 851-867, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31989285

ABSTRACT

PURPOSE: To develop a deep learning approach based on a deep residual neural network (ResNet101) for the automated detection of glaucomatous optic neuropathy (GON) using color fundus images, understand the process by which the model makes predictions, and explore the effect of the integration of fundus images and the medical history data from patients. METHODS: A total of 34,279 fundus images and the corresponding medical history data were retrospectively collected from cohorts of 2371 adult patients, and these images were labeled by 8 glaucoma experts, of which 26,585 fundus images (12,618 images with GON-confirmed eyes, 1114 images with GON-suspected eyes, and 12,853 NORMAL eye images) were included. We adopted a 10-fold cross-validation strategy to train and optimize our model. This model was tested in an independent testing dataset consisting of 3481 images (1524 images from NORMAL eyes, 1442 images from GON-confirmed eyes, and 515 images from GON-suspected eyes) from 249 patients. Moreover, the performance of the best model was compared with results obtained by two experts. Accuracy, sensitivity, specificity, kappa value, and area under the receiver operating characteristic curve (AUC) were calculated. Further, we performed qualitative evaluation of model predictions and occlusion testing. Finally, we assessed the effect of integrating medical history data in the final classification. RESULTS: In a multiclass comparison between GON-confirmed eyes, GON-suspected eyes and NORMAL eyes, our model achieved 0.941 (95% confidence interval [CI], 0.936-0.946) accuracy, 0.957 (95% CI, 0.953-0.961) sensitivity, and 0.929 (95% CI, 0.923-0.935) specificity. The AUC distinguishing referrals (GON-confirmed and GON-suspected eyes) from observation was 0.992 (95% CI, 0.991-0.993). Our best model had a kappa value of 0.927, while the two experts' kappa values were 0.928 and 0.925, respectively.
The best 2 binary classifiers distinguishing GON-confirmed/GON-suspected eyes from NORMAL eyes obtained accuracies of 0.955 and 0.965, sensitivities of 0.977 and 0.998, and specificities of 0.929 and 0.954, with AUCs of 0.992 and 0.999, respectively. Additionally, the occlusion testing showed that our model identified the neuroretinal rim region and retinal nerve fiber layer (RNFL) defect areas (superior or inferior) as the most important parts for the discrimination of GON; that is, it evaluated fundus images in a way similar to clinicians. Finally, the results of integration of fundus images with medical history data showed a slight improvement in sensitivity and specificity with similar AUCs. CONCLUSIONS: This approach could discriminate GON with high accuracy, sensitivity, specificity, and AUC using color fundus photographs. It may provide a second opinion on the diagnosis of glaucoma to the specialist quickly, efficiently and at low cost, and assist doctors and the public in large-scale screening for glaucoma.
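Occlusion testing slides a masking patch over the image and maps how much the model's score drops at each position. A minimal sketch with a toy image and a trivially simple stand-in score function (a real experiment would call the trained network):

```python
import numpy as np

def occlusion_map(image, score_fn, patch=2, fill=0.0):
    """Slide an occluding patch over the image and record the score drop
    at each position; large drops mark regions the model relies on."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h - patch + 1, w - patch + 1))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i, j] = base - score_fn(occluded)
    return heat

# Toy image whose "evidence" is a single bright pixel at (5, 5).
img = np.zeros((8, 8)); img[5, 5] = 1.0
heat = occlusion_map(img, score_fn=lambda x: float(x.sum()))
```

The heat map peaks wherever the patch covers the evidence, which is how the study could conclude the model attends to the neuroretinal rim and RNFL defects.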


Subjects
Deep Learning; Diagnostic Techniques, Ophthalmological; Glaucoma/complications; Intraocular Pressure/physiology; Neural Networks, Computer; Optic Disk/pathology; Optic Nerve Diseases/diagnosis; Glaucoma/diagnosis; Humans; Optic Nerve Diseases/etiology; ROC Curve; Retinal Ganglion Cells/pathology; Retrospective Studies; Tomography, Optical Coherence
8.
Biomed Opt Express ; 10(12): 6204-6226, 2019 Dec 01.
Article in English | MEDLINE | ID: mdl-31853395

ABSTRACT

Retinal disease classification is a significant problem in computer-aided diagnosis (CAD) for medical applications. This paper is focused on a 4-class classification problem to automatically detect choroidal neovascularization (CNV), diabetic macular edema (DME), DRUSEN, and NORMAL in optical coherence tomography (OCT) images. The proposed classification algorithm adopted an ensemble of four classification model instances to identify retinal OCT images, each of which was based on an improved residual neural network (ResNet50). The experiment followed a patient-level 10-fold cross-validation process on the development retinal OCT image dataset. The proposed approach achieved 0.973 (95% confidence interval [CI], 0.971-0.975) classification accuracy, 0.963 (95% CI, 0.960-0.966) sensitivity, and 0.985 (95% CI, 0.983-0.987) specificity at the B-scan level, matching or exceeding the performance of ophthalmologists with significant clinical experience. Other performance measures used in the study were the area under the receiver operating characteristic curve (AUC) and kappa value. The observations of the study implied that multi-ResNet50 ensembling was a useful technique when the availability of medical images was limited. In addition, we performed qualitative evaluation of model predictions and occlusion testing to understand the decision-making process of our model. The paper also provides an analytical discussion of misclassifications and of the pathology regions identified by occlusion testing. Finally, we explored the effect of the integration of retinal OCT images and medical history data from patients on model performance.
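Patient-level cross-validation assigns folds by patient rather than by image, so scans from one patient never straddle train and test. A minimal sketch with hypothetical patient IDs:

```python
import numpy as np

def patient_level_folds(patient_ids, k=10, seed=0):
    """Split image indices into k folds by patient, so every scan from a
    given patient lands in the same fold (prevents train/test leakage)."""
    rng = np.random.default_rng(seed)
    patients = np.array(sorted(set(patient_ids)))
    rng.shuffle(patients)
    groups = [set(patients[i::k]) for i in range(k)]
    return [[i for i, p in enumerate(patient_ids) if p in g] for g in groups]

# Eight OCT B-scans from five hypothetical patients.
ids = ["p1", "p1", "p2", "p3", "p3", "p3", "p4", "p5"]
folds = patient_level_folds(ids, k=3)
```

Splitting at the image level instead would leak near-duplicate B-scans of the same eye into the test set and inflate the reported metrics.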

9.
Transl Vis Sci Technol ; 8(6): 4, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31737428

ABSTRACT

PURPOSE: To achieve automatic diabetic retinopathy (DR) detection in retinal fundus photographs through the use of a deep transfer learning approach using the Inception-v3 network. METHODS: A total of 19,233 digital color fundus images were retrospectively obtained from 5278 adult patients presenting for DR screening. Of these, 8816 images passed image-quality review and were graded as no apparent DR (1374 images), mild nonproliferative DR (NPDR) (2152 images), moderate NPDR (2370 images), severe NPDR (1984 images), and proliferative DR (PDR) (936 images) by eight retinal experts according to the International Clinical Diabetic Retinopathy severity scale. After image preprocessing, 7935 DR images were selected from the above categories as a training dataset, while the rest of the images were used as the validation dataset. We introduced a 10-fold cross-validation strategy to assess and optimize our model. We also selected the publicly available, independent Messidor-2 dataset to test the performance of our model. For discrimination between no referral (no apparent DR and mild NPDR) and referral (moderate NPDR, severe NPDR, and PDR), we also computed prediction accuracy, sensitivity, specificity, area under the receiver operating characteristic curve (AUC), and κ value. RESULTS: The proposed approach achieved a high classification accuracy of 93.49% (95% confidence interval [CI], 93.13%-93.85%), with a 96.93% sensitivity (95% CI, 96.35%-97.51%) and a 93.45% specificity (95% CI, 93.12%-93.79%), while the AUC was up to 0.9905 (95% CI, 0.9887-0.9923) on the independent test dataset. The κ value of our best model was 0.919, while the three experts had κ values of 0.906, 0.931, and 0.914, independently. CONCLUSIONS: This approach could automatically detect DR with excellent sensitivity, accuracy, and specificity and could aid in making a referral recommendation for further evaluation and treatment with high reliability.
TRANSLATIONAL RELEVANCE: This approach has great value in early DR screening using retinal fundus photographs.
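The transfer-learning recipe, a frozen pretrained backbone plus a newly trained classification head, can be sketched with the backbone reduced to a fixed feature extractor and the head to logistic regression. The toy 2-D features below are invented stand-ins for Inception-v3 activations:

```python
import numpy as np

def train_head(features, labels, lr=0.5, epochs=300):
    """Transfer-learning sketch: the pretrained backbone is frozen (here
    it is just a fixed feature extractor producing `features`), and only
    a logistic-regression head is trained on top of it."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=features.shape[1])
    b = 0.0
    n = len(labels)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # sigmoid
        grad = p - labels                               # dL/dz for BCE
        w -= lr * features.T @ grad / n
        b -= lr * grad.mean()
    return w, b

# Toy "backbone features": two well-separated clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])
w, b = train_head(X, y)
acc = float(((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y).mean())
```

In practice one would also fine-tune some upper backbone layers at a small learning rate once the head has converged.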

10.
Microsc Res Tech ; 82(9): 1621-1627, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31264320

ABSTRACT

Microscopic imaging of uneven surfaces is difficult because of the limited depth of field. In this study, we developed a rapid auto-focus method for uneven surfaces based on image fusion. The Prewitt operator was used to detect the vertical edges of the images. Then, the focus position was theoretically calculated using a Gaussian function, and image fusion was applied to obtain the final in-focus image. An experiment was designed to verify the developed method. The results revealed that this method is effective for printed circuit boards.
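The two stages described, a Prewitt vertical-edge focus measure and a Gaussian function locating the focus position, can be sketched as follows. The focus scores are synthetic, and the three-point log-parabola fit is one common way to realize the Gaussian fit; the paper's exact procedure may differ.

```python
import numpy as np

def prewitt_focus_score(img):
    """Focus measure: energy of vertical edges from the Prewitt kernel."""
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    for i in range(3):          # explicit 3x3 correlation, no SciPy needed
        for j in range(3):
            gx += kx[i, j] * img[i:i + h - 2, j:j + w - 2]
    return float((gx ** 2).sum())

def gaussian_peak(z, s):
    """Fit a Gaussian (a parabola in log space) through three focus scores
    at equally spaced stage positions z; return the in-focus position."""
    l = np.log(s)
    return z[1] + 0.5 * (z[1] - z[0]) * (l[0] - l[2]) / (l[0] - 2 * l[1] + l[2])

# A sharp vertical edge scores higher than a featureless surface.
sharp = np.zeros((10, 10)); sharp[:, 5:] = 1.0
flat = np.full((10, 10), 0.5)

# Scores sampled from a true Gaussian focus curve peaking at z = 1.2.
z = np.array([0.0, 1.0, 2.0])
s = np.exp(-(z - 1.2) ** 2 / 0.8)
peak = gaussian_peak(z, s)
```

For a truly Gaussian focus curve the three-point fit recovers the peak exactly, so only three images are needed to predict the in-focus stage position.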

11.
Vision Res ; 160: 52-59, 2019 07.
Article in English | MEDLINE | ID: mdl-31095964

ABSTRACT

The human lens is considered to have a gradient refractive index (GRIN) distribution. The recently developed accommodating volume-constant age-dependent optical (AVOCADO) model can accurately describe the separate GRIN distributions in the axial and radial directions. Our study uses a finite element method to simulate the accommodation process and calculate the GRIN redistribution based on the AVOCADO model for 25-, 35- and 45-year-old lenses. The parameter p describes the steepness of the GRIN profile towards the lens periphery. The results show that axial p values increase with age. Under accommodation, the axial p value increases, while the radial p value decreases. We also use a ray tracing method to evaluate the optical performance of the lens. The aim of this paper is thus to provide an anatomically based finite element mechanical lens model with separate axial and radial refractive index profiles for a better understanding of accommodation at different ages.


Subjects
Accommodation, Ocular/physiology; Lens, Crystalline/physiology; Refraction, Ocular/physiology; Adult; Female; Humans; Male; Middle Aged; Models, Anatomic
12.
Ophthalmic Res ; 62(1): 1-10, 2019.
Article in English | MEDLINE | ID: mdl-31141806

ABSTRACT

PURPOSE: To compare the choroidal thickness (CT) measured by enhanced depth imaging optical coherence tomography (EDI-OCT) in preeclamptic, healthy pregnant, and healthy nonpregnant women. METHODS: Studies that focused on the CT evaluation of pregnant women were retrieved by searching PubMed, Embase, Ovid, Cochrane, and Web of Science. We used Stata 14.0 SE for the meta-analysis and presented the results as the weighted mean difference (WMD) with a corresponding 95% CI. RESULTS: A total of 14 studies with 1,227 participants were included in our meta-analysis. The CT of the healthy pregnant women (WMD = 34.19 µm, 95% CI: 20.63-47.76) was significantly higher than that of the nonpregnant women (test of WMD = 0: z = 4.94, p < 0.001), but the CT of the preeclamptic women (WMD = 54.30 µm, 95% CI: -13.40 to 122.01) was not significantly different from that of the nonpregnant women (test of WMD = 0: z = 1.57, p = 0.116). In the preeclampsia versus healthy pregnancy comparison, 3 studies found that the choroid was thinner with preeclampsia, while only one study found that the CT increased. CONCLUSIONS: This meta-analysis suggested that the CT of the healthy pregnant women was significantly higher than that of the nonpregnant women. The presence of preeclampsia might complicate this situation. Most studies found that the CT decreased in the preeclamptic patients because of the increases in systemic vasospasm and blood pressure, which led to no significant difference compared with the nonpregnant women.
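Fixed-effect inverse-variance pooling is the standard way a WMD meta-analysis combines per-study mean differences. A minimal sketch with hypothetical study data (Stata's estimators may additionally apply random-effects adjustments):

```python
import numpy as np

def pooled_wmd(diffs, ses):
    """Fixed-effect inverse-variance pooling of per-study mean
    differences; returns the pooled WMD and its 95% CI."""
    w = 1.0 / np.asarray(ses, dtype=float) ** 2   # weight = 1 / SE^2
    d = np.asarray(diffs, dtype=float)
    wmd = float((w * d).sum() / w.sum())
    se = float(1.0 / np.sqrt(w.sum()))
    return wmd, (wmd - 1.96 * se, wmd + 1.96 * se)

# Three hypothetical studies of choroidal-thickness difference (um).
wmd, (lo, hi) = pooled_wmd([30.0, 40.0, 35.0], [8.0, 10.0, 6.0])
```

Studies with smaller standard errors dominate the pooled estimate, and the pooled CI is narrower than any single study's, which is the point of pooling.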


Subjects
Choroid/pathology; Pre-Eclampsia/pathology; Blood Pressure/physiology; Case-Control Studies; Female; Humans; Intraocular Pressure/physiology; Pre-Eclampsia/physiopathology; Pregnancy; Tomography, Optical Coherence/methods
13.
Cornea ; 36(3): 310-316, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28002108

ABSTRACT

PURPOSE: To evaluate the corneal biomechanical properties of patients who have undergone penetrating keratoplasty (PK) or deep anterior lamellar keratoplasty (DALK) using the ocular response analyzer. METHODS: Stata 13.0 SE was used for this meta-analysis. Studies in the literature that focused on corneal hysteresis (CH) or corneal resistance factor (CRF) after PK or DALK were retrieved by searching PubMed, Embase, Ovid, and Cochrane databases. We present the results as weighted mean difference (WMD) with a corresponding 95% confidence interval (CI). RESULTS: Eight studies with a total of 750 eyes were included in the post-PK versus control group, and 4 studies with a total of 218 eyes were included in the post-DALK versus control group. The pooled results showed that CH and CRF were significantly reduced (P < 0.00001) in patients who had undergone PK (WMD = -1.16, 95% CI: -1.73 to -0.60 and WMD = -1.00, 95% CI: -1.61 to -0.40). No significant differences were found in either CH or CRF for patients who had undergone DALK (WMD = -0.27, 95% CI: -0.64 to -0.09 and WMD = -0.15, 95% CI: -0.53 to 0.23). CONCLUSIONS: This meta-analysis suggested that both CH and CRF had better recovery after corneal transplantation with DALK than with PK.


Subjects
Cornea/physiopathology; Corneal Diseases/surgery; Corneal Transplantation; Elasticity/physiology; Keratoplasty, Penetrating; Biomechanical Phenomena; Cornea/surgery; Corneal Diseases/physiopathology; Diagnostic Techniques, Ophthalmological; Humans; Recovery of Function/physiology; Visual Acuity/physiology
14.
Biomed Opt Express ; 7(9): 3220-3229, 2016 Sep 01.
Article in English | MEDLINE | ID: mdl-27699094

ABSTRACT

We developed a spectral-domain visible-light optical coherence tomography (VIS-OCT) based multimodal imaging technique which can accomplish simultaneous OCT and fluorescence imaging with a single broadband light source. Phantom experiments showed that by using the simultaneously acquired OCT images as a reference, the effect of light attenuation on the intensity of the fluorescent images by materials in front of the fluorescent target can be compensated. This capability of the multimodal imaging technique is of high importance for achieving quantification of the true intensities of autofluorescence (AF) imaging of the retina. We applied the technique in retinal imaging including AF imaging of the retinal pigment epithelium and fluorescein angiography (FA). We successfully demonstrated the effect of compensation on AF and FA images with the simultaneously acquired VIS-OCT images.

15.
Br J Ophthalmol ; 100(1): 9-14, 2016 Jan.
Article in English | MEDLINE | ID: mdl-25677672

ABSTRACT

PURPOSE: To evaluate the diagnostic performance of corneal confocal microscopy (CCM) in assessing corneal nerve parameters in patients with diabetic peripheral neuropathy (DPN). METHODS: Studies in the literature that focused on CCM and DPN were retrieved by searching PubMed, Excerpt Medica Database (EMBASE) and China National Knowledge Infrastructure (CNKI) databases. RevMan V.5.3 software was used for the meta-analysis. The results are presented as weighted mean difference (WMD) with a corresponding 95% CI. RESULTS: 13 studies with a total of 1680 participants were included in the meta-analysis. The pooled results showed that the corneal nerve fibre density, nerve branch density and nerve fibre length were significantly reduced (all p<0.00001) in the patients with DPN compared with healthy controls ((WMD=-18.07, 95% CI -21.93 to -14.20), (WMD=-25.35, 95% CI -30.96 to -19.74) and (WMD=-6.37, 95% CI -7.44 to -5.30)) and compared with the diabetic patients without DPN ((WMD=-8.83, 95% CI -11.49 to -6.17), (WMD=-13.54, 95% CI -20.41 to -6.66) and (WMD=-4.19, 95% CI -5.35 to -3.04)), respectively. No significant difference was found in the corneal nerve fibre tortuosity coefficient between diabetic patients with DPN and healthy controls (p=0.80) or diabetic patients without DPN (p=0.61). CONCLUSIONS: This meta-analysis suggested that CCM may be valuable for detecting and assessing early nerve damage in DPN patients.


Subjects
Cornea/innervation; Diabetic Neuropathies/diagnosis; Microscopy, Confocal; Trigeminal Nerve Diseases/diagnosis; Humans; Nerve Fibers/pathology
16.
Biomed Opt Express ; 5(12): 4242-8, 2014 Dec 01.
Article in English | MEDLINE | ID: mdl-25574436

ABSTRACT

We accomplished spectral domain optical coherence tomography and auto-fluorescence microscopy for imaging the retina with a single broadband light source centered at 480 nm. This technique is able to provide simultaneous structural imaging and lipofuscin molecular contrast of the retina. Since the two imaging modalities are provided by the same group of photons, their images are intrinsically registered. To test the capabilities of the technique we periodically imaged the retinas of the same rats for four weeks. The images successfully demonstrated lipofuscin accumulation in the retinal pigment epithelium with aging. The experimental results showed that the dual-modal imaging system can be a potentially powerful tool in the study of age-related degenerative retinal diseases.

17.
Opt Express ; 22(25): 31237-47, 2014 Dec 15.
Article in English | MEDLINE | ID: mdl-25607072

ABSTRACT

An auto-focus method for digital imaging systems is proposed that combines depth from focus (DFF) and improved depth from defocus (DFD). The traditional DFD method is improved to become more rapid, which achieves a fast initial focus. The defocus distance is first calculated by the improved DFD method. The result is then used as a search step in the searching stage of the DFF method. A dynamic focusing scheme is designed for the control software, which is able to eliminate environmental disturbances and other noises so that a fast and accurate focus can be achieved. An experiment is designed to verify the proposed focusing method and the results show that the method's efficiency is at least 3-5 times higher than that of the traditional DFF method.
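The hybrid scheme, a coarse DFD estimate seeding a DFF search whose step shrinks as it brackets the focus peak, can be sketched as a hill climb. The quadratic focus curve and the starting values below are invented for illustration:

```python
def autofocus(score_at, z0, step, tol=1e-3):
    """DFF search seeded by DFD: start at the coarse DFD estimate z0,
    step with the DFD-derived step size, and halve the step (reversing
    direction) whenever the focus score stops improving."""
    z, s = z0, score_at(z0)
    direction = 1.0
    while step > tol:
        z_next = z + direction * step
        s_next = score_at(z_next)
        if s_next > s:
            z, s = z_next, s_next     # keep climbing
        else:
            direction = -direction    # overshot: reverse and refine
            step *= 0.5
    return z

# Synthetic focus curve peaking at z = 0.7; DFD supplies z0 = 0, step 0.4.
best = autofocus(lambda z: -(z - 0.7) ** 2, 0.0, 0.4)
```

Because the first steps are large (from the DFD estimate) and only shrink near the peak, far fewer images are acquired than with a fixed-step DFF sweep, consistent with the reported speedup.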

18.
Ophthalmic Surg Lasers Imaging ; 43(3): 252-6, 2012.
Article in English | MEDLINE | ID: mdl-22421200

ABSTRACT

The authors demonstrate the feasibility and advantage of spectral-domain optical coherence tomography (SD-OCT) for single-shot ocular biometric measurement during the development of the mouse eye. A high-resolution SD-OCT system was built for single-shot imaging of the whole mouse eye in vivo. The axial resolution and imaging depth of the system are 4.5 µm (in tissue) and 5.2 mm, respectively. The system is capable of acquiring a cross-sectional OCT image consisting of 2,048 depth scans in 85 ms. The imaging capability of the SD-OCT system was validated by imaging the normal ocular growth and experimental myopia model using C57BL/6J mice. The biometric dimensions of the mouse eye can be calculated directly from one snapshot of the SD-OCT image. The biometric parameters of the mouse eye including axial length, corneal thickness, anterior chamber depth, lens thickness, vitreous chamber depth, and retinal thickness were successfully measured by the SD-OCT. In the normal ocular growth group, the axial length increased significantly from 28 to 82 days of age (P < .001). The lens thickness increased and the vitreous chamber depth decreased significantly during this period (P < .001 and P = .001, respectively). In the experimental myopia group, there were significant increases in vitreous chamber depth and axial length in comparison to the control eyes (P = .040 and P < .001, respectively). SD-OCT is capable of providing single-shot direct, fast, and high-resolution measurements of the dimensions of young and adult mouse eyes. As a result, SD-OCT is a potentially powerful tool that can be easily applied to research in eye development and myopia using small animal models.


Subjects
Biometry/methods; Disease Models, Animal; Eye/pathology; Myopia/diagnosis; Tomography, Optical Coherence/methods; Animals; Anterior Eye Segment/pathology; Axial Length, Eye/pathology; Feasibility Studies; Mice; Mice, Inbred C57BL; Retina/pathology
19.
J Biomed Opt ; 16(1): 018002, 2011.
Article in English | MEDLINE | ID: mdl-21280927

ABSTRACT

Analysis and applications of vision correction via accommodating intraocular lenses (AIOL) are presented. By Gaussian optics, analytic formulas for the accommodation rate function (M) for two-optics and three-optics systems are derived and compared with the exact numerical results. In a single-optics AIOL, a typical value of M is 0.5-1.5 D/mm for an IOL power of 10-20 diopters. For a given IOL power, higher M is achieved with a positive IOL than with a negative IOL. In the dual-optics AIOL, maximum accommodation is predicted when the front positive optics moves toward the corneal plane and the back negative optics moves backward. Our analytic formulas predict that a greater accommodative rate may be achieved by using a positive-powered front optics, a general feature when either the front or back optics is mobile. The M function is used to find the piggy-back IOL power for customized design based on the individual ocular parameters. Many of the new features demonstrated in this study can be easily realized by our analytic formulas, but not by the ray-tracing method.
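The accommodation rate can be illustrated with thin-lens Gaussian optics: differentiate the equivalent power of the cornea + IOL system with respect to IOL position. This simplified sketch ignores the refraction-to-retina referral that the paper's full formulas include, so the magnitude is only indicative; the cornea power (43 D) and distances are assumed values, not taken from the paper.

```python
def system_power(p_cornea, p_iol, d):
    """Gaussian-optics equivalent power of a two-optics (cornea + IOL)
    system; d is the cornea-to-IOL separation in metres (reduced)."""
    return p_cornea + p_iol - d * p_cornea * p_iol

def accommodation_rate(p_cornea, p_iol, d_mm, dz_mm=0.01):
    """Accommodation rate M in D/mm: finite-difference change in system
    power as the IOL moves toward the cornea."""
    p1 = system_power(p_cornea, p_iol, d_mm / 1000.0)
    p2 = system_power(p_cornea, p_iol, (d_mm - dz_mm) / 1000.0)
    return (p2 - p1) / dz_mm

# Cornea ~43 D, a +20 D single-optics IOL sitting 4 mm behind it.
m = accommodation_rate(43.0, 20.0, 4.0)
```

Even this toy model reproduces the qualitative findings: M scales with the product of the two powers (here ~0.9 D/mm, within the quoted 0.5-1.5 D/mm range), and a negative IOL yields a negative rate.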


Subjects
Computer-Aided Design; Lenses, Intraocular; Models, Biological; Prosthesis Design/methods; Refractive Errors/physiopathology; Refractive Errors/rehabilitation; Computer Simulation; Humans
20.
Opt Lett ; 35(23): 4018-20, 2010 Dec 01.
Article in English | MEDLINE | ID: mdl-21124598

ABSTRACT

We investigated the feasibility of simultaneously imaging two distinctive molecular contrasts provided by the absorbed photons in biological tissues with a single light source. The molecular contrasts are based on two physical effects induced by the absorbed photons: photoacoustics (PA) and autofluorescence (AF). In an integrated multimodal imaging system, the PA and AF signals were detected by a high-sensitivity ultrasonic transducer and an avalanche photodetector, respectively. The system was tested by imaging ocular tissue samples, including the retinal pigment epithelium and the ciliary body. The acquired images provided information on the spatial distributions of melanin and lipofuscin in these samples.


Subjects
Acoustics; Microscopy/methods; Molecular Imaging/methods; Photons; Absorption; Animals; Humans; Lipofuscin/metabolism; Melanins/metabolism; Optical Phenomena; Retinal Pigment Epithelium/metabolism